From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: kosaki.motohiro@jp.fujitsu.com,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Mel Gorman <mel@csn.ul.ie>, Shaohua Li <shaohua.li@intel.com>,
Christoph Lameter <cl@linux.com>,
David Rientjes <rientjes@google.com>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH] set_pgdat_percpu_threshold() don't use for_each_online_cpu
Date: Tue, 23 Nov 2010 17:32:33 +0900 (JST)
Message-ID: <20101123172546.7BC5.A69D9226@jp.fujitsu.com>
In-Reply-To: <20101116160720.5244ea22.akpm@linux-foundation.org>
Sorry for the delay.
> Well what's actually happening here? Where is the alleged deadlock?
>
> In the kernel_init() case we have a GFP_KERNEL allocation inside
> get_online_cpus(). In the other case we simply have kswapd calling
> get_online_cpus(), yes?
Yes.
>
> Does lockdep consider all kswapd actions to be "in reclaim context"?
> If so, why?
kswapd calls lockdep_set_current_reclaim_state() at thread start time,
so lockdep treats everything kswapd does as reclaim context. See below.
----------------------------------------------------------------------
static int kswapd(void *p)
{
	unsigned long order;
	pg_data_t *pgdat = (pg_data_t *)p;
	struct task_struct *tsk = current;
	struct reclaim_state reclaim_state = {
		.reclaimed_slab = 0,
	};
	const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);

	lockdep_set_current_reclaim_state(GFP_KERNEL);
	......
----------------------------------------------------------------------
> > I think we have two option 1) call lockdep_clear_current_reclaim_state()
> > every time 2) use for_each_possible_cpu instead for_each_online_cpu.
> >
> > Following patch use (2) beucase removing get_online_cpus() makes good
> > side effect. It reduce potentially cpu-hotplug vs memory-shortage deadlock
> > risk.
>
> Well. Being able to run for_each_online_cpu() is a pretty low-level
> and fundamental thing. It's something we're likely to want to do more
> and more of as time passes. It seems a bad thing to tell ourselves
> that we cannot use it in reclaim context. That blots out large chunks
> of filesystem and IO-layer code as well!
>
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -193,18 +193,16 @@ void set_pgdat_percpu_threshold(pg_data_t *pgdat,
> > int threshold;
> > int i;
> >
> > - get_online_cpus();
> > for (i = 0; i < pgdat->nr_zones; i++) {
> > zone = &pgdat->node_zones[i];
> > if (!zone->percpu_drift_mark)
> > continue;
> >
> > threshold = (*calculate_pressure)(zone);
> > - for_each_online_cpu(cpu)
> > + for_each_possible_cpu(cpu)
> > per_cpu_ptr(zone->pageset, cpu)->stat_threshold
> > = threshold;
> > }
> > - put_online_cpus();
> > }
>
> That's a pretty sad change IMO, especially of num_possible_cpus is much
> larger than num_online_cpus.
As far as I know, CPU hotplug is mainly used in the server area, and almost
all servers have ACPI or a similarly flexible firmware interface, so
num_possible_cpus is not much bigger than the actual number of sockets.
IOW, I haven't heard of embedded people using CPU hotplug. If you have,
please let me know.
> What do we need to do to make get_online_cpus() safe to use in reclaim
> context? (And in kswapd context, if that's really equivalent to
> "reclaim context").
Hmm... that's too hard.
kmalloc() is called from everywhere and CPU hotplug can happen at any time,
so any locking design would break your requested rule. ;)
And again, right _now_ I don't think for_each_possible_cpu() is very costly.
Thread overview: 30+ messages
2010-10-27 8:47 [PATCH 0/2] Reduce the amount of time spent in watermark-related functions Mel Gorman
2010-10-27 8:47 ` [PATCH 1/2] mm: page allocator: Adjust the per-cpu counter threshold when memory is low Mel Gorman
2010-10-27 20:16 ` Christoph Lameter
2010-10-28 1:09 ` KAMEZAWA Hiroyuki
2010-10-28 9:49 ` Mel Gorman
2010-10-28 9:58 ` KAMEZAWA Hiroyuki
2010-11-14 8:53 ` [PATCH] set_pgdat_percpu_threshold() don't use for_each_online_cpu KOSAKI Motohiro
2010-11-15 10:26 ` Mel Gorman
2010-11-15 14:04 ` Christoph Lameter
2010-11-16 9:58 ` Mel Gorman
2010-11-17 0:07 ` Andrew Morton
2010-11-19 15:29 ` Christoph Lameter
2010-11-23 8:32 ` KOSAKI Motohiro [this message]
2010-11-01 7:06 ` [PATCH 1/2] mm: page allocator: Adjust the per-cpu counter threshold when memory is low KOSAKI Motohiro
2010-11-26 16:06 ` Kyle McMartin
2010-11-29 9:56 ` Mel Gorman
2010-11-29 13:16 ` Kyle McMartin
2010-11-29 15:08 ` Mel Gorman
2010-11-29 15:22 ` Kyle McMartin
2010-11-29 15:26 ` Kyle McMartin
2010-11-29 15:58 ` Mel Gorman
2010-12-23 22:18 ` David Rientjes
2010-12-23 22:35 ` Andrew Morton
2010-12-23 23:00 ` Kyle McMartin
2010-12-23 23:07 ` David Rientjes
2010-12-23 23:17 ` Andrew Morton
2010-10-27 8:47 ` [PATCH 2/2] mm: vmstat: Use a single setter function and callback for adjusting percpu thresholds Mel Gorman
2010-10-27 20:13 ` Christoph Lameter
2010-10-28 1:10 ` KAMEZAWA Hiroyuki
2010-11-01 7:06 ` KOSAKI Motohiro