From: Thomas Gleixner <tglx@linutronix.de>
To: Mel Gorman <mgorman@techsingularity.net>, Linux-MM <linux-mm@kvack.org>
Cc: Linux-RT-Users <linux-rt-users@vger.kernel.org>, LKML <linux-kernel@vger.kernel.org>, Chuck Lever <chuck.lever@oracle.com>, Jesper Dangaard Brouer <brouer@redhat.com>, Matthew Wilcox <willy@infradead.org>, Mel Gorman <mgorman@techsingularity.net>, Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Subject: Re: [PATCH 2/6] mm/page_alloc: Convert per-cpu list protection to local_lock
Date: Wed, 31 Mar 2021 11:55:56 +0200
Message-ID: <877dln640j.ffs@nanos.tec.linutronix.de> (raw)
In-Reply-To: <20210329120648.19040-3-mgorman@techsingularity.net>

On Mon, Mar 29 2021 at 13:06, Mel Gorman wrote:
> There is a lack of clarity of what exactly local_irq_save/local_irq_restore
> protects in page_alloc.c. It conflates the protection of per-cpu page
> allocation structures with per-cpu vmstat deltas.
>
> This patch protects the PCP structure using local_lock which
> for most configurations is identical to IRQ enabling/disabling.
> The scope of the lock is still wider than it should be but this is
> decreased in later patches. The per-cpu vmstat deltas are protected by
> preempt_disable/preempt_enable where necessary instead of relying on
> IRQ disable/enable.

Yes, this goes into the right direction and I really appreciate the
scoped protection for clarity's sake.

> #ifdef CONFIG_MEMORY_HOTREMOVE
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 8a8f1a26b231..01b74ff73549 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -887,6 +887,7 @@ void cpu_vm_stats_fold(int cpu)
>
>  		pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
>
> +		preempt_disable();

What's the reason for the preempt_disable() here? A comment would be
appreciated.
>  		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
>  			if (pzstats->vm_stat_diff[i]) {
>  				int v;
> @@ -908,6 +909,7 @@ void cpu_vm_stats_fold(int cpu)
>  				global_numa_diff[i] += v;
>  			}
>  #endif
> +		preempt_enable();
>  	}
>
>  	for_each_online_pgdat(pgdat) {
> @@ -941,6 +943,7 @@ void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *pzstats)
>  {
>  	int i;
>
> +	preempt_disable();

Same here.

>  	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
>  		if (pzstats->vm_stat_diff[i]) {
>  			int v = pzstats->vm_stat_diff[i];
> @@ -959,6 +962,7 @@ void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *pzstats)
>  			atomic_long_add(v, &vm_numa_stat[i]);
>  		}
>  #endif
> +	preempt_enable();

Thanks,

        tglx
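[Editor's note] The conversion Mel describes replaces open-coded IRQ toggling around the per-cpu page lists with a named local_lock. A minimal sketch of that pattern follows; the struct layout and function are illustrative stand-ins, not the exact patch, though they follow the kernel's local_lock API as documented in Documentation/locking/locktypes.rst:

```c
/* Sketch only: identifiers below (struct pagesets, the commit helper)
 * are illustrative; consult the actual patch for the real code.
 */
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct pagesets {
	local_lock_t lock;
};
static DEFINE_PER_CPU(struct pagesets, pagesets) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void pcp_list_op_sketch(void)
{
	unsigned long flags;

	/* On !PREEMPT_RT this compiles down to local_irq_save(), so the
	 * generated code is unchanged; on PREEMPT_RT it becomes a per-CPU
	 * spinlock. Either way, the lock now documents exactly which data
	 * (the per-cpu pagesets) the critical section protects.
	 */
	local_lock_irqsave(&pagesets.lock, flags);
	/* ... manipulate the per-cpu page lists ... */
	local_unlock_irqrestore(&pagesets.lock, flags);
}
```

This is what makes the protection "scoped": lockdep and readers can now see what the lock covers, whereas a bare local_irq_save() protects everything and nothing in particular.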
Thread overview: 18+ messages

2021-03-29 12:06 [RFC PATCH 0/6] Use local_lock for pcp protection and reduce stat overhead  Mel Gorman
2021-03-29 12:06 ` [PATCH 1/6] mm/page_alloc: Split per cpu page lists and zone stats  Mel Gorman
2021-03-29 12:06 ` [PATCH 2/6] mm/page_alloc: Convert per-cpu list protection to local_lock  Mel Gorman
2021-03-31  9:55   ` Thomas Gleixner [this message]
2021-03-31 11:01     ` Mel Gorman
2021-03-31 17:42       ` Thomas Gleixner
2021-03-31 17:46         ` Thomas Gleixner
2021-03-31 20:42           ` Mel Gorman
2021-03-29 12:06 ` [PATCH 3/6] mm/vmstat: Convert NUMA statistics to basic NUMA counters  Mel Gorman
2021-03-29 12:06 ` [PATCH 4/6] mm/vmstat: Inline NUMA event counter updates  Mel Gorman
2021-03-29 12:06 ` [PATCH 5/6] mm/page_alloc: Batch the accounting updates in the bulk allocator  Mel Gorman
2021-03-29 12:06 ` [PATCH 6/6] mm/page_alloc: Reduce duration that IRQs are disabled for VM counters  Mel Gorman
2021-03-30 18:51 ` [RFC PATCH 0/6] Use local_lock for pcp protection and reduce stat overhead  Jesper Dangaard Brouer
2021-03-31  7:38   ` Mel Gorman
2021-03-31  8:17     ` Jesper Dangaard Brouer
2021-03-31  8:52       ` Mel Gorman
2021-03-31  9:51 ` Thomas Gleixner