* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-13 3:15 Manfred Spraul
0 siblings, 0 replies; 14+ messages in thread
From: Manfred Spraul @ 2003-12-13 3:15 UTC (permalink / raw)
To: Rusty Russell; +Cc: linux-kernel
Hi Rusty,
From Chapter 4.1:
> |spin_lock_irqsave()| (include/linux/spinlock.h) is a variant which
> saves whether interrupts were on or off in a flags word, which is
> passed to |spin_unlock_irqrestore()|. This means that the same code
> can be used inside an hard irq handler (where interrupts are already
> off) and in softirqs (where the irq disabling is required).
Interrupts are typically on within the hard irq handler.
spin_lock() is usually ok because an interrupt handler is never
reentered. Thus if a lock is only accessed from a single irq handler,
then spin_lock() is the faster approach. That's why many NIC drivers use
spin_lock() instead of spin_lock_irqsave() in their irq handlers.
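The tradeoff Manfred describes might be sketched like this (illustrative only, 2.6-era API; the mydev_* names and the context split are invented, not taken from any real driver):

```c
/* Sketch: this lock is taken only from one irq handler and from
 * process-context code that explicitly blocks that handler. */
static spinlock_t mydev_lock = SPIN_LOCK_UNLOCKED;

static irqreturn_t mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	/* The kernel never reenters a running handler for the same irq,
	 * so plain spin_lock() suffices here and is cheaper than
	 * spin_lock_irqsave(). */
	spin_lock(&mydev_lock);
	/* ... acknowledge the device and process the event ... */
	spin_unlock(&mydev_lock);
	return IRQ_HANDLED;
}

static void mydev_touch_state(void)
{
	unsigned long flags;

	/* Process context must keep the handler out, or the handler
	 * could interrupt us while we hold the lock and deadlock
	 * spinning on it. */
	spin_lock_irqsave(&mydev_lock, flags);
	/* ... touch shared driver state ... */
	spin_unlock_irqrestore(&mydev_lock, flags);
}
```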
OTOH: if a driver lock is a global resource that is used from different
irqs, then it must either use _irqsave(), or set SA_INTERRUPT.
Examples: rtc_lock relies on SA_INTERRUPT: it's touched from the rtc irq
and the timer irq path, and both rtc and timer set SA_INTERRUPT.
I assume IDE relies on _irqsave(), but the code is too difficult to follow.
Btw, perhaps you could add the 2nd synchronization approach for
interrupts: if it's an extremely rare event, then no lock at all in the
irq handler (the kernel guarantees the handler is not reentered), and
spin_lock+disable_irq() in the softirq/tasklet. My network drivers use
that to synchronize packet rx with close.
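A hedged sketch of that second approach (hypothetical names; mydev_lock is assumed declared as in the earlier examples, and real drivers may differ in detail): the rare-event handler takes no lock, and the close path shuts it out with disable_irq() before touching shared state.

```c
/* Sketch only.  The rx handler is the sole toucher of rx state and is
 * never reentered by the kernel, so it takes no lock at all. */
static irqreturn_t mydev_rx_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	/* ... pull packets off the device, lock-free ... */
	return IRQ_HANDLED;
}

static int mydev_close(struct net_device *dev)
{
	/* disable_irq() masks the line and waits for any currently
	 * running handler to complete; after it returns, the rx
	 * handler cannot run concurrently with us. */
	disable_irq(dev->irq);
	spin_lock_bh(&mydev_lock);	/* exclude the softirq/tasklet side */
	/* ... tear down rx state ... */
	spin_unlock_bh(&mydev_lock);
	enable_irq(dev->irq);
	return 0;
}
```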
--
Manfred
* [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-12 5:24 Rusty Russell
From: Rusty Russell @ 2003-12-12 5:24 UTC (permalink / raw)
To: linux-kernel; +Cc: Paul E. McKenney

OK, I've put the html version up for your reading pleasure: the diff
is quite extensive and hard to read.

http://www.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/

Feedback welcome,
Rusty.
--
Anyone who quotes me in their sig is an idiot. -- Rusty Russell.
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-12 15:44 Dave Jones
From: Dave Jones @ 2003-12-12 15:44 UTC (permalink / raw)
To: Rusty Russell; +Cc: linux-kernel, Paul E. McKenney

On Fri, Dec 12, 2003 at 04:24:18PM +1100, Rusty Russell wrote:
> OK, I've put the html version up for your reading pleasure: the diff
> is quite extensive and hard to read.
>
> http://www.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/
>
> Feedback welcome,

Hi Rusty,
Might be worth mentioning in the Per-CPU data section that code doing
operations on CPU registers (MSRs and the like) needs to be protected
by an explicit preempt_disable() / preempt_enable() pair if it's doing
operations that it expects to run on a specific CPU.

For examples, see arch/i386/kernel/msr.c & cpuid.c

Dave
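Dave's point, sketched for i386 (the helper name is invented; see the files he cites for the real code): without the preempt_disable()/preempt_enable() pair, a CONFIG_PREEMPT kernel could migrate the task to another CPU between register accesses.

```c
/* Sketch: read an MSR of whichever CPU we are currently running on.
 * preempt_disable() pins us to this CPU for the duration. */
static void read_msr_on_this_cpu(u32 msr, u32 *lo, u32 *hi)
{
	preempt_disable();	/* no migration from here... */
	rdmsr(msr, *lo, *hi);	/* per-CPU register access */
	preempt_enable();	/* ...to here */
}
```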
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-12 16:25 Keith Owens
From: Keith Owens @ 2003-12-12 16:25 UTC (permalink / raw)
To: Rusty Russell; +Cc: linux-kernel

On Fri, 12 Dec 2003 15:44:01 +0000, Dave Jones <davej@redhat.com> wrote:
> Might be worth mentioning in the Per-CPU data section that code doing
> operations on CPU registers (MSRs and the like) needs to be protected
> by an explicit preempt_disable() / preempt_enable() pair if it's doing
> operations that it expects to run on a specific CPU.

Also calls to smp_call_function() need to be wrapped in preempt_disable,
plus any work that is done on the current cpu before/after calling a
function on the other cpus. Lack of preempt disable could result in
the operation being done twice on one cpu and not at all on another.
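Keith's pattern might look like this (sketch; func/info are placeholders, and smp_call_function() is shown with its 2.6-era four-argument form):

```c
/* Sketch: run func on the local CPU and then on all the others,
 * without being migrated in between.  Without the preempt_disable()
 * pair we could be preempted after the local call and resumed on a
 * CPU that smp_call_function() then runs func on as well -- doing
 * the work twice there and never on the CPU we left. */
static void run_on_all_cpus(void (*func)(void *), void *info)
{
	preempt_disable();
	func(info);				/* this CPU */
	smp_call_function(func, info, 1, 1);	/* every other CPU; wait */
	preempt_enable();
}
```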
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-12 18:25 Dave Jones
From: Dave Jones @ 2003-12-12 18:25 UTC (permalink / raw)
To: Keith Owens; +Cc: Rusty Russell, linux-kernel

On Sat, Dec 13, 2003 at 03:25:52AM +1100, Keith Owens wrote:
> On Fri, 12 Dec 2003 15:44:01 +0000, Dave Jones <davej@redhat.com> wrote:
> > Might be worth mentioning in the Per-CPU data section that code doing
> > operations on CPU registers (MSRs and the like) needs to be protected
> > by an explicit preempt_disable() / preempt_enable() pair if it's doing
> > operations that it expects to run on a specific CPU.
>
> Also calls to smp_call_function() need to be wrapped in preempt_disable,
> plus any work that is done on the current cpu before/after calling a
> function on the other cpus. Lack of preempt disable could result in
> the operation being done twice on one cpu and not at all on another.

And where you want to do the same thing on every processor, there's a
handy on_each_cpu() which takes care of this for you.

Dave
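With on_each_cpu() (again the 2.6-era four-argument form) the manual wrapper collapses to one call; the helper does the preemption bracketing itself. A sketch:

```c
/* Sketch: on_each_cpu() runs func on the current CPU and on all the
 * others, handling preemption internally around the local call. */
static void run_on_all_cpus(void (*func)(void *), void *info)
{
	on_each_cpu(func, info, 1, 1);	/* retry, wait */
}
```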
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-13 0:28 Keith Owens
From: Keith Owens @ 2003-12-13 0:28 UTC (permalink / raw)
To: Dave Jones; +Cc: Rusty Russell, linux-kernel

On Fri, 12 Dec 2003 18:25:56 +0000, Dave Jones <davej@redhat.com> wrote:
> On Sat, Dec 13, 2003 at 03:25:52AM +1100, Keith Owens wrote:
> > Also calls to smp_call_function() need to be wrapped in preempt_disable,
> > plus any work that is done on the current cpu before/after calling a
> > function on the other cpus. Lack of preempt disable could result in
> > the operation being done twice on one cpu and not at all on another.
>
> And where you want to do the same thing on every processor, there's a
> handy on_each_cpu() which takes care of this for you.

I know, but there is code in 2.6 that calls smp_call_function directly,
without preempt disable around it.
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-12 21:05 Rob Love
From: Rob Love @ 2003-12-12 21:05 UTC (permalink / raw)
To: Dave Jones; +Cc: Rusty Russell, linux-kernel, Paul E. McKenney

On Fri, 2003-12-12 at 10:44, Dave Jones wrote:
> Might be worth mentioning in the Per-CPU data section that code doing
> operations on CPU registers (MSRs and the like) needs to be protected
> by an explicit preempt_disable() / preempt_enable() pair if it's doing
> operations that it expects to run on a specific CPU.
>
> For examples, see arch/i386/kernel/msr.c & cpuid.c

Good point. I think this can be generalized to "you must remain atomic
so long as you expect the processor state to remain consistent." For
example, while manipulating processor registers or modes. This means
that you must disable kernel preemption and must not sleep within the
critical region.

Rob Love
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-15 2:28 Rusty Russell
From: Rusty Russell @ 2003-12-15 2:28 UTC (permalink / raw)
To: Dave Jones; +Cc: linux-kernel, Keith Owens, Rob Love

In message <20031212154401.GA10584@redhat.com> you write:
> On Fri, Dec 12, 2003 at 04:24:18PM +1100, Rusty Russell wrote:
> > OK, I've put the html version up for your reading pleasure: the diff
> > is quite extensive and hard to read.
> >
> > http://www.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/
> >
> > Feedback welcome,
>
> Hi Rusty,
> Might be worth mentioning in the Per-CPU data section that code doing
> operations on CPU registers (MSRs and the like) needs to be protected
> by an explicit preempt_disable() / preempt_enable() pair if it's doing
> operations that it expects to run on a specific CPU.
>
> For examples, see arch/i386/kernel/msr.c & cpuid.c

I don't think it belongs in the Per-CPU data section, that's a bit
disingenuous. It's a separate section by itself, I think. But it's
also fairly rare.

The smp_call_function() case is more subtle, but in fact, there are
only two calls to smp_call_function in non-arch-specific code, and
both are wrong:

mm/slab.c: smp_call_function_all_cpus() should just be on_each_cpu.
net/core/flow.c: This should also be on_each_cpu: I actually have a
patch for this, too.

The aim of this document is to give the reader an overview of core
techniques, not describe every possible variant. IMHO a more likely
candidate for a section would be atomic_dec_and_lock(), with which
there is no real concept of an "owner" of an object, but destruction
is done implicitly (and atomically) by the last user.

Hope that clarifies,
Rusty.
--
Anyone who quotes me in their sig is an idiot. -- Rusty Russell.
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-12 19:35 Paul E. McKenney
From: Paul E. McKenney @ 2003-12-12 19:35 UTC (permalink / raw)
To: Rusty Russell; +Cc: linux-kernel

On Fri, Dec 12, 2003 at 04:24:18PM +1100, Rusty Russell wrote:
> OK, I've put the html version up for your reading pleasure: the diff
> is quite extensive and hard to read.
>
> http://www.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/
>
> Feedback welcome,
> Rusty.
> --
> Anyone who quotes me in their sig is an idiot. -- Rusty Russell.

Hello, Rusty,

Good stuff! A few comments, as always!

                        Thanx, Paul

Glossary:

o Hardware Interrupt / Hardware IRQ: How does in_irq() know that
  interrupts have been blocked? The local_irq_disable() does not
  seem to mess with the counter, and preempt_disable() just does
  the standard inc/dec stuff...

  o in_irq() is hardirq_count().
  o hardirq_count() is (preempt_count() & HARDIRQ_MASK).
  o preempt_count is an integer, HARDIRQ_MASK is a constant that
    is out of the normal inc/dec range.

  I see how an interrupt handler causes in_irq() to do its thing
  via the irq_enter() and irq_exit() macros, but I don't see how
  masking interrupts makes this happen.

  Probably just my confusion, but...

  Ditto for "in_interrupt()". That would be for both the analysis
  and the probable confusion on my part.

o Software Interrupt / softirq: formatting botch "of'software".
  This would be "o'software", right?

o Preemption: Would it be worth changing the first bit of the
  second sentence to read something like: "In 2.5.4 and later,
  when CONFIG_PREEMPT is set, this changes:"?

Overzealous Prevention Of Deadlocks: Cute!!!

Avoiding Locks: Read Copy Update

o Might be worth noting explicitly early on that updaters are
  running concurrently with readers. Should be obvious given that
  the readers aren't doing any explicit synchronization, but I
  have run into confusion on this point surprisingly often.

o Please add a note to the list_for_each_entry_rcu() description
  saying that writers (who hold appropriate locks) need not use
  the _rcu() variant.

o Don't understand the blank line before and after the
  "struct rcu_head rcu;", but clearly doesn't affect
  functionality. ;-)

o If nothing blocks between the call to __cache_find() and the
  eventual object_put(), it is worthwhile to avoid the
  reference-count manipulation. This would make all of
  cache_find() be almost as fast as UP, rather than just
  __cache_find().
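The reader/updater split Paul is commenting on might be sketched like this, loosely modelled on the guide's cache example (cache, cache_lock, object_get() and free_object() are assumed helpers; call_rcu() is shown with its 2.6.0 three-argument form):

```c
struct object {
	struct list_head list;
	struct rcu_head rcu;
	int id;
	/* ... refcount and payload ... */
};

/* Reader: no lock at all.  An updater may be changing the list
 * underneath us -- exactly the point Paul asks to be made explicit. */
static struct object *cache_find(int id)
{
	struct object *obj;

	rcu_read_lock();
	list_for_each_entry_rcu(obj, &cache, list) {
		if (obj->id == id) {
			object_get(obj);  /* pin before dropping the read 'lock' */
			rcu_read_unlock();
			return obj;
		}
	}
	rcu_read_unlock();
	return NULL;
}

/* Updater: holds cache_lock, so it could use the plain list macros;
 * the object is freed only after all pre-existing readers are done. */
static void cache_delete(struct object *obj)
{
	spin_lock(&cache_lock);
	list_del_rcu(&obj->list);
	spin_unlock(&cache_lock);
	call_rcu(&obj->rcu, free_object, obj);	/* deferred free */
}
```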
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-13 3:16 Zwane Mwaikambo
From: Zwane Mwaikambo @ 2003-12-13 3:16 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: Rusty Russell, Linux Kernel

A few comments on top of Paul's and another separate one;

On Fri, 12 Dec 2003, Paul E. McKenney wrote:
> o Hardware Interrupt / Hardware IRQ: How does in_irq() know that
>   interrupts have been blocked? The local_irq_disable() does not
>   seem to mess with the counter, and preempt_disable() just does
>   the standard inc/dec stuff...
>
>   o in_irq() is hardirq_count().
>   o hardirq_count() is (preempt_count() & HARDIRQ_MASK).
>   o preempt_count is an integer, HARDIRQ_MASK is a constant that
>     is out of the normal inc/dec range.
>
>   I see how an interrupt handler causes in_irq() to do its thing
>   via the irq_enter() and irq_exit() macros, but I don't see how
>   masking interrupts makes this happen.
>
>   Probably just my confusion, but...
>
>   Ditto for "in_interrupt()". That would be for both the analysis
>   and the probable confusion on my part.

I'd have to agree, it would require OR'ing with irqs_disabled(). And
another addendum would be that in_interrupt() also returns true when
in a softirq handler.

> o Software Interrupt / softirq: formatting botch "of'software".
>   This would be "o'software", right?
>
> o Preemption: Would it be worth changing the first bit of the
>   second sentence to read something like: "In 2.5.4 and later,
>   when CONFIG_PREEMPT is set, this changes:"?

I think he could also add that local_irq_disable() also disables
preemption implicitly.

  User Context
    The kernel executing on behalf of a particular process (ie. a
    system call or trap) or kernel thread. You can tell which process
    with the current() macro.) Not to be confused with userspace. Can
    be interrupted by software or hardware interrupts.

Small nit, it's 'current'
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-15 5:17 Rusty Russell
From: Rusty Russell @ 2003-12-15 5:17 UTC (permalink / raw)
To: Zwane Mwaikambo; +Cc: Paul E. McKenney, Linux Kernel

In message <Pine.LNX.4.58.0312122203090.23752@montezuma.fsmlabs.com> you write:
> I think he could also add that local_irq_disable() also disables
> preemption implicitly.

True, but I don't actually mention preemption very much at all, except
in RCU. That belongs more in the Unreliable Guide To Kernel Hacking,
IMHO. Revising which is next week's job.

> Small nit, it's 'current'

Thanks, fixed!

Rusty.
--
Anyone who quotes me in their sig is an idiot. -- Rusty Russell.
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-15 5:17 Rusty Russell
From: Rusty Russell @ 2003-12-15 5:17 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel

In message <20031212193559.GA1614@us.ibm.com> you write:
> Glossary:
>
> o Hardware Interrupt / Hardware IRQ: How does in_irq() know that
>   interrupts have been blocked? The local_irq_disable() does not
>   seem to mess with the counter, and preempt_disable() just does
>   the standard inc/dec stuff...

You're right, it doesn't.

> o in_irq() is hardirq_count().
> o hardirq_count() is (preempt_count() & HARDIRQ_MASK).
> o preempt_count is an integer, HARDIRQ_MASK is a constant that
>   is out of the normal inc/dec range.
>
> I see how an interrupt handler causes in_irq() to do its thing
> via the irq_enter() and irq_exit() macros, but I don't see how
> masking interrupts makes this happen.
>
> Probably just my confusion, but...
>
> Ditto for "in_interrupt()". That would be for both the analysis
> and the probable confusion on my part.

Yes. I've removed both those: AFAICT they were never true.

> o Software Interrupt / softirq: formatting botch "of'software".
>   This would be "o'software", right?

Looks ok here:

  Tasklets and softirqs both fall into the category of 'software
  interrupts'.

> o Preemption: Would it be worth changing the first bit of the
>   second sentence to read something like: "In 2.5.4 and later,
>   when CONFIG_PREEMPT is set, this changes:"?

I was trying to make it a little future-proof: I think CONFIG_PREEMPT
should go away some day.

> Overzealous Prevention Of Deadlocks: Cute!!!

This is untouched from the old version of the document. I had a
troubled youth...

> Avoiding Locks: Read Copy Update
>
> o Might be worth noting explicitly early on that updaters are
>   running concurrently with readers. Should be obvious given that
>   the readers aren't doing any explicit synchronization, but I
>   have run into confusion on this point surprisingly often.

OK. Changed the second paragraph from:

  How do we get rid of read locks? That is actually quite simple:

to:

  How do we get rid of read locks? Getting rid of read locks means
  that writers may be changing the list underneath the readers.
  That is actually quite simple:

> o Please add a note to the list_for_each_entry_rcu() description
>   saying that writers (who hold appropriate locks) need not use
>   the _rcu() variant.

OK:

  Once again, there is a
  <function>list_for_each_entry_rcu()</function>
  (<filename>include/linux/list.h</filename>) to help you. Of
  course, writers can just use
  <function>list_for_each_entry()</function>, since there cannot
  be two simultaneous writers.

> o Don't understand the blank line before and after the
>   "struct rcu_head rcu;", but clearly doesn't affect
>   functionality. ;-)

Hmm, it would logically be at the start of the structure. I think I
wanted to avoid associating it with the refcnt_t.

> o If nothing blocks between the call to __cache_find() and the
>   eventual object_put(), it is worthwhile to avoid the
>   reference-count manipulation. This would make all of
>   cache_find() be almost as fast as UP, rather than just
>   __cache_find().

Good point. Text added at the bottom of that section:

  <para>
  There is a furthur optimization possible here: remember our original
  cache code, where there were no reference counts and the caller simply
  held the lock whenever using the object? This is still possible: if
  you hold the lock, noone can delete the object, so you don't need to
  get and put the reference count.
  </para>

  <para>
  Now, because the 'read lock' in RCU is simply disabling preemption, a
  caller which always preemption disabled between calling
  <function>cache_find()</function> and
  <function>object_put()</function> does not need to actually get and
  put the reference count: we could expose
  <function>__cache_find()</function> by making it non-static, and
  such callers could simply call that.
  </para>

  <para>
  The benefit here is that the reference count is not written to: the
  object is not altered in any way, which is much faster on SMP
  machines due to caching.
  </para>

I've uploaded a new draft with these and other fixes...

Thanks!
Rusty.
--
Anyone who quotes me in their sig is an idiot. -- Rusty Russell.
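In code, the optimization Rusty describes might look like this (sketch; use_object() is a placeholder for whatever the caller does with the object, and __cache_find() is the guide's lock-free lookup made non-static):

```c
/* Sketch: a caller that keeps preemption disabled across lookup and
 * use needs no reference count -- disabled preemption *is* the RCU
 * read 'lock', so the object cannot be freed out from under us. */
struct object *obj;

preempt_disable();		/* holds off RCU grace periods */
obj = __cache_find(id);		/* no object_get()/object_put() */
if (obj)
	use_object(obj);	/* must not sleep in this region */
preempt_enable();		/* deferred frees may now complete */
```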
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-15 22:22 Paul E. McKenney
From: Paul E. McKenney @ 2003-12-15 22:22 UTC (permalink / raw)
To: Rusty Russell; +Cc: linux-kernel

On Mon, Dec 15, 2003 at 04:17:47PM +1100, Rusty Russell wrote:
> In message <20031212193559.GA1614@us.ibm.com> you write:
> > o Software Interrupt / softirq: formatting botch "of'software".
> >   This would be "o'software", right?
>
> Looks ok here:
>
>   Tasklets and softirqs both fall into the category of 'software
>   interrupts'.

It was late and my eyes were tired. Can't even blame it on my browser!!!

> > o Preemption: Would it be worth changing the first bit of the
> >   second sentence to read something like: "In 2.5.4 and later,
> >   when CONFIG_PREEMPT is set, this changes:"?
>
> I was trying to make it a little future-proof: I think CONFIG_PREEMPT
> should go away some day.

I hear you! But I suspect that it may take quite some time for that
day to arrive.

> > Overzealous Prevention Of Deadlocks: Cute!!!
>
> This is untouched from the old version of the document. I had a
> troubled youth...

;-) Well, don't wring -all- of the fun out of the document!

> > Avoiding Locks: Read Copy Update
> >
> > o Might be worth noting explicitly early on that updaters are
> >   running concurrently with readers. Should be obvious given that
> >   the readers aren't doing any explicit synchronization, but I
> >   have run into confusion on this point surprisingly often.
>
> OK. Changed the second paragraph from:
>
>   How do we get rid of read locks? That is actually quite simple:
>
> to:
>
>   How do we get rid of read locks? Getting rid of read locks means
>   that writers may be changing the list underneath the readers.
>   That is actually quite simple:

Looks good! Upon rereading... Does "wmb()" want to be "smp_wmb()"?

> > o Please add a note to the list_for_each_entry_rcu() description
> >   saying that writers (who hold appropriate locks) need not use
> >   the _rcu() variant.
>
> OK:
>
>   Once again, there is a
>   <function>list_for_each_entry_rcu()</function>
>   (<filename>include/linux/list.h</filename>) to help you. Of
>   course, writers can just use
>   <function>list_for_each_entry()</function>, since there cannot
>   be two simultaneous writers.

Also looks good! Again, upon rereading, "read Read Copy Update code"
probably wants to be "real Read Copy Update code". I moused it this
time, given my past record with eyeballing. ;-)

> > o If nothing blocks between the call to __cache_find() and the
> >   eventual object_put(), it is worthwhile to avoid the
> >   reference-count manipulation. This would make all of
> >   cache_find() be almost as fast as UP, rather than just
> >   __cache_find().
>
> Good point. Text added at the bottom of that section:
>
>   <para>
>   There is a furthur optimization possible here: remember our original
>   cache code, where there were no reference counts and the caller simply
>   held the lock whenever using the object? This is still possible: if
>   you hold the lock, noone can delete the object, so you don't need to
>   get and put the reference count.
>   </para>
>
>   <para>
>   Now, because the 'read lock' in RCU is simply disabling preemption, a
>   caller which always preemption disabled between calling
                 disables preemption
>   <function>cache_find()</function> and
>   <function>object_put()</function> does not need to actually get and
>   put the reference count: we could expose
>   <function>__cache_find()</function> by making it non-static, and
>   such callers could simply call that.
>   </para>
>
>   <para>
>   The benefit here is that the reference count is not written to: the
>   object is not altered in any way, which is much faster on SMP
>   machines due to caching.
>   </para>

Other than the grammar nit above, looks good!

> I've uploaded a new draft with these and other fixes...

Good stuff, thank you!!!

                        Thanx, Paul
* Re: [DOCUMENTATION] Revised Unreliable Kernel Locking Guide
@ 2003-12-16 6:32 Rusty Russell
From: Rusty Russell @ 2003-12-16 6:32 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel

In message <20031215222213.GA1270@us.ibm.com> you write:
> > How do we get rid of read locks? Getting rid of read locks means
> > that writers may be changing the list underneath the readers.
> > That is actually quite simple:
>
> Looks good! Upon rereading... Does "wmb()" want to be "smp_wmb()"?

Yes, but I didn't want to turn this into a document on memory barriers:
you'll note that I almost avoided it entirely.

> Again, upon rereading, "read Read Copy Update code" probably wants to
> be "real Read Copy Update code". I moused it this time, given
> my past record with eyeballing. ;-)

Fixed.

> > Now, because the 'read lock' in RCU is simply disabling preemption, a
> > caller which always preemption disabled between calling
>                disables preemption

Ah, I inserted a 'has' as well (a caller which always has preemption
disabled...). The implication is that the caller probably has preempt
disabled as a side effect of being in an interrupt or holding a
spinlock.

> > I've uploaded a new draft with these and other fixes...
>
> Good stuff, thank you!!!

Hey, thanks for the review!

Cheers,
Rusty.
--
Anyone who quotes me in their sig is an idiot. -- Rusty Russell.