From: Andrew Morton <akpm@linux-foundation.org>
To: Chris Metcalf <cmetcalf@tilera.com>
Cc: Tejun Heo <tj@kernel.org>, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Thomas Gleixner <tglx@linutronix.de>, Frederic Weisbecker <fweisbec@gmail.com>, Cody P Schafer <cody@linux.vnet.ibm.com>
Subject: Re: [PATCH v8] mm: make lru_add_drain_all() selective
Date: Wed, 14 Aug 2013 13:44:30 -0700
Message-ID: <20130814134430.50cb8d609643620b00ab3705@linux-foundation.org>
In-Reply-To: <201308142029.r7EKTMRw023404@farm-0002.internal.tilera.com>

On Wed, 14 Aug 2013 16:22:18 -0400 Chris Metcalf <cmetcalf@tilera.com> wrote:

> This change makes lru_add_drain_all() only selectively interrupt
> the cpus that have per-cpu free pages that can be drained.
>
> This is important in nohz mode where calling mlockall(), for
> example, otherwise will interrupt every core unnecessarily.

I think the patch will work, but it's a bit sad to no longer gain the
general ability to do schedule_on_some_cpus().  Oh well.

> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -247,7 +247,7 @@ extern void activate_page(struct page *);
>  extern void mark_page_accessed(struct page *);
>  extern void lru_add_drain(void);
>  extern void lru_add_drain_cpu(int cpu);
> -extern int lru_add_drain_all(void);
> +extern void lru_add_drain_all(void);
>  extern void rotate_reclaimable_page(struct page *page);
>  extern void deactivate_page(struct page *page);
>  extern void swap_setup(void);
> diff --git a/mm/swap.c b/mm/swap.c
> index 4a1d0d2..8d19543 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -405,6 +405,11 @@ static void activate_page_drain(int cpu)
>  	pagevec_lru_move_fn(pvec, __activate_page, NULL);
>  }
>
> +static bool need_activate_page_drain(int cpu)
> +{
> +	return pagevec_count(&per_cpu(activate_page_pvecs, cpu)) != 0;
> +}

static int need_activate_page_drain(int cpu)
{
	return pagevec_count(&per_cpu(activate_page_pvecs, cpu));
}

would be shorter and faster.
bool rather sucks that way.  It's a performance-vs-niceness thing.  I
guess one has to look at the call frequency when deciding.

>  void activate_page(struct page *page)
>  {
>  	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
> @@ -422,6 +427,11 @@ static inline void activate_page_drain(int cpu)
>  {
>  }
>
> +static bool need_activate_page_drain(int cpu)
> +{
> +	return false;
> +}
> +
>  void activate_page(struct page *page)
>  {
>  	struct zone *zone = page_zone(page);
> @@ -678,12 +688,36 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
>  	lru_add_drain();
>  }
>
> -/*
> - * Returns 0 for success
> - */
> -int lru_add_drain_all(void)
> +static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
> +
> +void lru_add_drain_all(void)
>  {
> -	return schedule_on_each_cpu(lru_add_drain_per_cpu);
> +	static DEFINE_MUTEX(lock);
> +	static struct cpumask has_work;
> +	int cpu;
> +
> +	mutex_lock(&lock);

This is a bit scary but I expect it will be OK - later threads will
just twiddle thumbs while some other thread does all or most of their
work for them.

> +	get_online_cpus();
> +	cpumask_clear(&has_work);
> +
> +	for_each_online_cpu(cpu) {
> +		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
> +
> +		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
> +		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
> +		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
> +		    need_activate_page_drain(cpu)) {
> +			INIT_WORK(work, lru_add_drain_per_cpu);

This initialization is only needed once per boot, but I don't see a
convenient way of doing it.

> +			schedule_work_on(cpu, work);
> +			cpumask_set_cpu(cpu, &has_work);
> +		}
> +	}
> +
> +	for_each_cpu(cpu, &has_work)

for_each_online_cpu()?

> +		flush_work(&per_cpu(lru_add_drain_work, cpu));
> +
> +	put_online_cpus();
> +	mutex_unlock(&lock);
> +}