* [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
From: Jan Kara @ 2012-02-06 13:55 UTC
To: linux-fsdevel
Cc: LKML, hare, Andrew Morton, Al Viro, Christoph Hellwig, Jan Kara

When discovery of lots of disks happen in parallel, we call
invalidate_bh_lrus() once for each disk from partitioning code resulting in a
storm of IPIs and causing a softlockup detection to fire (it takes several
*minutes* for a machine to execute all the invalidate_bh_lrus() calls).

Fix the issue by allowing only single invalidation to run using a mutex and let
waiters for mutex figure out whether someone invalidated LRUs for them while
they were waiting.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/buffer.c |   23 ++++++++++++++++++++++-
 1 files changed, 22 insertions(+), 1 deletions(-)

I feel this is slightly hacky approach but it works. If someone has better
idea, please speak up.

diff --git a/fs/buffer.c b/fs/buffer.c
index 1a30db7..56b0d2b 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1384,10 +1384,31 @@ static void invalidate_bh_lru(void *arg)
 	}
 	put_cpu_var(bh_lrus);
 }
-
+
+/*
+ * Invalidate all buffers in LRUs. Since we have to signal all CPUs to
+ * invalidate their per-cpu local LRU lists this is rather expensive operation.
+ * So we optimize the case of several parallel calls to invalidate_bh_lrus()
+ * which happens from partitioning code when lots of disks appear in the
+ * system during boot.
+ */
 void invalidate_bh_lrus(void)
 {
+	static DEFINE_MUTEX(bh_invalidate_mutex);
+	static long bh_invalidate_sequence;
+
+	long my_bh_invalidate_sequence = bh_invalidate_sequence;
+
+	mutex_lock(&bh_invalidate_mutex);
+	/* Someone did bh invalidation while we were sleeping? */
+	if (my_bh_invalidate_sequence != bh_invalidate_sequence)
+		goto out;
+	bh_invalidate_sequence++;
+	/* Inc of bh_invalidate_sequence must happen before we invalidate bhs */
+	smp_wmb();
 	on_each_cpu(invalidate_bh_lru, NULL, 1);
+out:
+	mutex_unlock(&bh_invalidate_mutex);
 }
 EXPORT_SYMBOL_GPL(invalidate_bh_lrus);
-- 
1.7.1
* Re: [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
From: Srivatsa S. Bhat @ 2012-02-06 15:42 UTC
To: Jan Kara
Cc: linux-fsdevel, LKML, hare, Andrew Morton, Al Viro, Christoph Hellwig,
    Gilad Ben-Yossef

On 02/06/2012 07:25 PM, Jan Kara wrote:

> When discovery of lots of disks happen in parallel, we call
> invalidate_bh_lrus() once for each disk from partitioning code resulting in a
> storm of IPIs and causing a softlockup detection to fire (it takes several
> *minutes* for a machine to execute all the invalidate_bh_lrus() calls).
>
> Fix the issue by allowing only single invalidation to run using a mutex and let
> waiters for mutex figure out whether someone invalidated LRUs for them while
> they were waiting.
>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  fs/buffer.c |   23 ++++++++++++++++++++++-
>  1 files changed, 22 insertions(+), 1 deletions(-)
>
> I feel this is slightly hacky approach but it works. If someone has better
> idea, please speak up.
>

Something related that you might be interested in:
https://lkml.org/lkml/2012/2/5/109

(This is part of Gilad's patchset that tries to reduce cross-CPU IPI
interference.)

Regards,
Srivatsa S. Bhat

> diff --git a/fs/buffer.c b/fs/buffer.c
> index 1a30db7..56b0d2b 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -1384,10 +1384,31 @@ static void invalidate_bh_lru(void *arg)
>  	}
>  	put_cpu_var(bh_lrus);
>  }
> -
> +
> +/*
> + * Invalidate all buffers in LRUs. Since we have to signal all CPUs to
> + * invalidate their per-cpu local LRU lists this is rather expensive operation.
> + * So we optimize the case of several parallel calls to invalidate_bh_lrus()
> + * which happens from partitioning code when lots of disks appear in the
> + * system during boot.
> + */
>  void invalidate_bh_lrus(void)
>  {
> +	static DEFINE_MUTEX(bh_invalidate_mutex);
> +	static long bh_invalidate_sequence;
> +
> +	long my_bh_invalidate_sequence = bh_invalidate_sequence;
> +
> +	mutex_lock(&bh_invalidate_mutex);
> +	/* Someone did bh invalidation while we were sleeping? */
> +	if (my_bh_invalidate_sequence != bh_invalidate_sequence)
> +		goto out;
> +	bh_invalidate_sequence++;
> +	/* Inc of bh_invalidate_sequence must happen before we invalidate bhs */
> +	smp_wmb();
>  	on_each_cpu(invalidate_bh_lru, NULL, 1);
> +out:
> +	mutex_unlock(&bh_invalidate_mutex);
>  }
>  EXPORT_SYMBOL_GPL(invalidate_bh_lrus);
* Re: [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
From: Hannes Reinecke @ 2012-02-06 15:51 UTC
To: Srivatsa S. Bhat
Cc: Jan Kara, linux-fsdevel, LKML, Andrew Morton, Al Viro,
    Christoph Hellwig, Gilad Ben-Yossef

On 02/06/2012 04:42 PM, Srivatsa S. Bhat wrote:
> On 02/06/2012 07:25 PM, Jan Kara wrote:
>
>> When discovery of lots of disks happen in parallel, we call
>> invalidate_bh_lrus() once for each disk from partitioning code resulting in a
>> storm of IPIs and causing a softlockup detection to fire (it takes several
>> *minutes* for a machine to execute all the invalidate_bh_lrus() calls).
>>
>> Fix the issue by allowing only single invalidation to run using a mutex and let
>> waiters for mutex figure out whether someone invalidated LRUs for them while
>> they were waiting.
>>
>> Signed-off-by: Jan Kara <jack@suse.cz>
>> ---
>>  fs/buffer.c |   23 ++++++++++++++++++++++-
>>  1 files changed, 22 insertions(+), 1 deletions(-)
>>
>> I feel this is slightly hacky approach but it works. If someone has better
>> idea, please speak up.
>>
>
> Something related that you might be interested in:
> https://lkml.org/lkml/2012/2/5/109
>
> (This is part of Gilad's patchset that tries to reduce cross-CPU IPI
> interference.)
>
Yes, but this is only part of the equation.

When booting a machine with lots of disks, chances are that each CPU
_will_ have LRU BHs attached to it (due to partition table reading).
However, these LRU BHs have nothing to do with the device in question,
so we wouldn't even need to send IPIs here. Sadly we seem to lack the
facilities to figure that out (I'm not an expert in that area, so I
can't tell for sure :-).

So the best we can hope for is to serialise the IPIs so as not to
overload the system with tons of IPIs.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                   zSeries & Storage
hare@suse.de                          +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)
* Re: [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
From: Jan Kara @ 2012-02-06 16:47 UTC
To: Srivatsa S. Bhat
Cc: Jan Kara, linux-fsdevel, LKML, hare, Andrew Morton, Al Viro,
    Christoph Hellwig, Gilad Ben-Yossef

On Mon 06-02-12 21:12:36, Srivatsa S. Bhat wrote:
> On 02/06/2012 07:25 PM, Jan Kara wrote:
>
> > When discovery of lots of disks happen in parallel, we call
> > invalidate_bh_lrus() once for each disk from partitioning code resulting in a
> > storm of IPIs and causing a softlockup detection to fire (it takes several
> > *minutes* for a machine to execute all the invalidate_bh_lrus() calls).
> >
> > Fix the issue by allowing only single invalidation to run using a mutex and let
> > waiters for mutex figure out whether someone invalidated LRUs for them while
> > they were waiting.
> >
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> >  fs/buffer.c |   23 ++++++++++++++++++++++-
> >  1 files changed, 22 insertions(+), 1 deletions(-)
> >
> > I feel this is slightly hacky approach but it works. If someone has better
> > idea, please speak up.
> >
>
> Something related that you might be interested in:
> https://lkml.org/lkml/2012/2/5/109
>
> (This is part of Gilad's patchset that tries to reduce cross-CPU IPI
> interference.)

Thanks for the pointer. I didn't know about it. As Hannes wrote, this
need not be enough for our use case as there might indeed be some bhs in
the LRU. But I'd be interested how well the patchset works anyway. Maybe it
would be enough because after all when we invalidate LRUs subsequent
callers will see them empty and not issue IPI? Hannes, can you give a try
to the patches?

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR
* Re: [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
From: Andrew Morton @ 2012-02-06 21:17 UTC
To: Jan Kara
Cc: Srivatsa S. Bhat, linux-fsdevel, LKML, hare, Al Viro,
    Christoph Hellwig, Gilad Ben-Yossef

On Mon, 6 Feb 2012 17:47:32 +0100
Jan Kara <jack@suse.cz> wrote:

> On Mon 06-02-12 21:12:36, Srivatsa S. Bhat wrote:
> > On 02/06/2012 07:25 PM, Jan Kara wrote:
> >
> > > When discovery of lots of disks happen in parallel, we call
> > > invalidate_bh_lrus() once for each disk from partitioning code resulting in a
> > > storm of IPIs and causing a softlockup detection to fire (it takes several
> > > *minutes* for a machine to execute all the invalidate_bh_lrus() calls).

Gad.  How many disks are we talking about here?

> > > Fix the issue by allowing only single invalidation to run using a mutex and let
> > > waiters for mutex figure out whether someone invalidated LRUs for them while
> > > they were waiting.
> > >
> > > Signed-off-by: Jan Kara <jack@suse.cz>
> > > ---
> > >  fs/buffer.c |   23 ++++++++++++++++++++++-
> > >  1 files changed, 22 insertions(+), 1 deletions(-)
> > >
> > > I feel this is slightly hacky approach but it works. If someone has better
> > > idea, please speak up.
> > >
> >
> > Something related that you might be interested in:
> > https://lkml.org/lkml/2012/2/5/109
> >
> > (This is part of Gilad's patchset that tries to reduce cross-CPU IPI
> > interference.)
>
> Thanks for the pointer. I didn't know about it. As Hannes wrote, this
> need not be enough for our use case as there might indeed be some bhs in
> the LRU. But I'd be interested how well the patchset works anyway. Maybe it
> would be enough because after all when we invalidate LRUs subsequent
> callers will see them empty and not issue IPI? Hannes, can you give a try
> to the patches?

If that doesn't work then an option to think about is to have a bool to
disable the bh LRU code.  That would add a test-n-branch to
__find_get_block(), which wouldn't kill us.  Arrange for the LRU code
to be disabled during device probing.  Or just leave the LRU disabled
until very late in boot, perhaps.

Also, I'm wondering why we call invalidate_bh_lrus() at all during
partition reading.  Presumably it's where we're shooting down the
blockdev pagecache (you didn't tell us and I'm too lazy to hunt it
down).  But do we really need to drop the pagecache at
whatever-this-callsite-is?
* Re: [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
From: Jan Kara @ 2012-02-06 22:25 UTC
To: Andrew Morton
Cc: Jan Kara, Srivatsa S. Bhat, linux-fsdevel, LKML, hare, Al Viro,
    Christoph Hellwig, Gilad Ben-Yossef

On Mon 06-02-12 13:17:17, Andrew Morton wrote:
> On Mon, 6 Feb 2012 17:47:32 +0100
> Jan Kara <jack@suse.cz> wrote:
>
> > On Mon 06-02-12 21:12:36, Srivatsa S. Bhat wrote:
> > > On 02/06/2012 07:25 PM, Jan Kara wrote:
> > >
> > > > When discovery of lots of disks happen in parallel, we call
> > > > invalidate_bh_lrus() once for each disk from partitioning code resulting in a
> > > > storm of IPIs and causing a softlockup detection to fire (it takes several
> > > > *minutes* for a machine to execute all the invalidate_bh_lrus() calls).
>
> Gad.  How many disks are we talking about here?

I think something around hundred scsi disks in this case (number of
physical drives is actually lower but multipathing blows it up). I actually
saw machines with close to thousand scsi disks (yes, they had names like
sdabc ;).

> > > > Fix the issue by allowing only single invalidation to run using a mutex and let
> > > > waiters for mutex figure out whether someone invalidated LRUs for them while
> > > > they were waiting.
> > > >
> > > > Signed-off-by: Jan Kara <jack@suse.cz>
> > > > ---
> > > >  fs/buffer.c |   23 ++++++++++++++++++++++-
> > > >  1 files changed, 22 insertions(+), 1 deletions(-)
> > > >
> > > > I feel this is slightly hacky approach but it works. If someone has better
> > > > idea, please speak up.
> > > >
> > >
> > > Something related that you might be interested in:
> > > https://lkml.org/lkml/2012/2/5/109
> > >
> > > (This is part of Gilad's patchset that tries to reduce cross-CPU IPI
> > > interference.)
> > Thanks for the pointer. I didn't know about it. As Hannes wrote, this
> > need not be enough for our use case as there might indeed be some bhs in
> > the LRU. But I'd be interested how well the patchset works anyway. Maybe it
> > would be enough because after all when we invalidate LRUs subsequent
> > callers will see them empty and not issue IPI? Hannes, can you give a try
> > to the patches?
>
> If that doesn't work then an option to think about is to have a bool to
> disable the bh LRU code.  That would add a test-n-branch to
> __find_get_block(), which wouldn't kill us.  Arrange for the LRU code
> to be disabled during device probing.  Or just leave the LRU disabled
> until very late in boot, perhaps.
>
> Also, I'm wondering why we call invalidate_bh_lrus() at all during
> partition reading.  Presumably it's where we're shooting down the
> blockdev pagecache (you didn't tell us and I'm too lazy to hunt it
> down).  But do we really need to drop the pagecache at
> whatever-this-callsite-is?

block/genhd.c has in register_disk():
	...
	bdev = bdget_disk(disk, 0);
	if (!bdev)
		goto exit;

	bdev->bd_invalidated = 1;
	err = blkdev_get(bdev, FMODE_READ, NULL);
	if (err < 0)
		goto exit;
	blkdev_put(bdev, FMODE_READ);
	...
And in blkdev_put() (actually __blkdev_put()) bd_openers drops to 0 so we
call kill_bdev() which calls invalidate_bh_lrus(). So yes, we are
unnecessarily eager to flush things there but I'm not sure if I see a
cleaner solution.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR
* Re: [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
From: Gilad Ben-Yossef @ 2012-02-07 16:25 UTC
To: Jan Kara
Cc: Andrew Morton, Srivatsa S. Bhat, linux-fsdevel, LKML, hare,
    Al Viro, Christoph Hellwig

On Tue, Feb 7, 2012 at 12:25 AM, Jan Kara <jack@suse.cz> wrote:
> On Mon 06-02-12 13:17:17, Andrew Morton wrote:
>> On Mon, 6 Feb 2012 17:47:32 +0100
>> Jan Kara <jack@suse.cz> wrote:
>>
>> > On Mon 06-02-12 21:12:36, Srivatsa S. Bhat wrote:
>> > > On 02/06/2012 07:25 PM, Jan Kara wrote:
>> > >
>> > > > When discovery of lots of disks happen in parallel, we call
>> > > > invalidate_bh_lrus() once for each disk from partitioning code resulting in a
>> > > > storm of IPIs and causing a softlockup detection to fire (it takes several
>> > > > *minutes* for a machine to execute all the invalidate_bh_lrus() calls).
>>
>> Gad.  How many disks are we talking about here?
> I think something around hundred scsi disks in this case (number of
> physical drives is actually lower but multipathing blows it up). I actually
> saw machines with close to thousand scsi disks (yes, they had names like
> sdabc ;).

LOL. Is that a huge SCSI disk array in your server or your are just
happy to see me... ? :-)

> ...
>> > >
>> > > Something related that you might be interested in:
>> > > https://lkml.org/lkml/2012/2/5/109
>> > >
>> > > (This is part of Gilad's patchset that tries to reduce cross-CPU IPI
>> > > interference.)
>> > Thanks for the pointer. I didn't know about it. As Hannes wrote, this
>> > need not be enough for our use case as there might indeed be some bhs in
>> > the LRU. But I'd be interested how well the patchset works anyway. Maybe it
>> > would be enough because after all when we invalidate LRUs subsequent
>> > callers will see them empty and not issue IPI? Hannes, can you give a try
>> > to the patches?

I think its worth a shot since the mutex just delays the IPIs instead
of canceling them
altogether.

A somewhat similar issue in the direct reclaim path of the buddy
allocator trying to reclaim per cpu pages was causing a massive storm of
IPIs during OOM with concurrent work loads, and the IPI noise patches
mitigate 85% of the IPIs sent just by checking to see if there are any
per cpu pages on the CPU you are about to IPI, so maybe the same kind of
logic applies here as well.

Thanks,
Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker
gilad@benyossef.com
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru
* Re: [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
From: Jan Kara @ 2012-02-07 18:29 UTC
To: Gilad Ben-Yossef
Cc: Jan Kara, Andrew Morton, Srivatsa S. Bhat, linux-fsdevel, LKML,
    hare, Al Viro, Christoph Hellwig

On Tue 07-02-12 18:25:18, Gilad Ben-Yossef wrote:
> On Tue, Feb 7, 2012 at 12:25 AM, Jan Kara <jack@suse.cz> wrote:
> > On Mon 06-02-12 13:17:17, Andrew Morton wrote:
> >> On Mon, 6 Feb 2012 17:47:32 +0100
> >> Jan Kara <jack@suse.cz> wrote:
> >>
> >> > On Mon 06-02-12 21:12:36, Srivatsa S. Bhat wrote:
> >> > > On 02/06/2012 07:25 PM, Jan Kara wrote:
> >> > >
> >> > > > When discovery of lots of disks happen in parallel, we call
> >> > > > invalidate_bh_lrus() once for each disk from partitioning code resulting in a
> >> > > > storm of IPIs and causing a softlockup detection to fire (it takes several
> >> > > > *minutes* for a machine to execute all the invalidate_bh_lrus() calls).
> >>
> >> Gad.  How many disks are we talking about here?
> > I think something around hundred scsi disks in this case (number of
> > physical drives is actually lower but multipathing blows it up). I actually
> > saw machines with close to thousand scsi disks (yes, they had names like
> > sdabc ;).
>
> LOL. Is that a huge SCSI disk array in your server or your are just
> happy to see me... ? :-)
>
> > ...
> >> > >
> >> > > Something related that you might be interested in:
> >> > > https://lkml.org/lkml/2012/2/5/109
> >> > >
> >> > > (This is part of Gilad's patchset that tries to reduce cross-CPU IPI
> >> > > interference.)
> >> > Thanks for the pointer. I didn't know about it. As Hannes wrote, this
> >> > need not be enough for our use case as there might indeed be some bhs in
> >> > the LRU. But I'd be interested how well the patchset works anyway. Maybe it
> >> > would be enough because after all when we invalidate LRUs subsequent
> >> > callers will see them empty and not issue IPI? Hannes, can you give a try
> >> > to the patches?
>
> I think its worth a shot since the mutex just delays the IPIs instead
> of canceling them
> altogether.

Well, mutex will just delay callers but the sequence logic behind the
mutex will reduce number of IPIs a lot - all waiters for mutex will be
satisfied by a single signalling of all CPUs while previously they would
each do the signalling.

> A somewhat similar issue in the direct reclaim path of the buddy
> allocator trying to reclaim per cpu pages was causing a massive storm of
> IPIs during OOM with concurrent work loads and the IPI noise patches
> mitigate 85% of the IPIs sent just by checking to see if there are any
> per cpu pages on the CPU you are about to IPI, so maybe the same kind of
> logic applies here as well.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR
* Re: [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
From: Gilad Ben-Yossef @ 2012-02-08 7:09 UTC
To: Jan Kara
Cc: Andrew Morton, Srivatsa S. Bhat, linux-fsdevel, LKML, hare,
    Al Viro, Christoph Hellwig

On Tue, Feb 7, 2012 at 8:29 PM, Jan Kara <jack@suse.cz> wrote:
> On Tue 07-02-12 18:25:18, Gilad Ben-Yossef wrote:
>> On Tue, Feb 7, 2012 at 12:25 AM, Jan Kara <jack@suse.cz> wrote:
...
>> I think its worth a shot since the mutex just delays the IPIs instead
>> of canceling them
>> altogether.
> Well, mutex will just delay callers but the sequence logic behind the
> mutex will reduce number of IPIs a lot - all waiters for mutex will be
> satisfied by a single signalling of all CPUs while previously they would
> each do the signalling.

Oh, you are right. I've missed that part completely.

Note to self: never try to read LKML email on your smartphone...

Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker
gilad@benyossef.com
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru