From: Mel Gorman <mgorman@techsingularity.net>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Christoph Lameter <cl@linux.com>,
	David Rientjes <rientjes@google.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Jann Horn <jannh@google.com>
Subject: Re: [RFC 09/26] mm, slub: move disabling/enabling irqs to ___slab_alloc()
Date: Tue, 25 May 2021 16:10:08 +0100	[thread overview]
Message-ID: <20210525151008.GV30378@techsingularity.net> (raw)
In-Reply-To: <f2e9187a-dea8-ef55-b815-9ac295b46919@suse.cz>

On Tue, May 25, 2021 at 02:47:10PM +0200, Vlastimil Babka wrote:
> On 5/25/21 2:35 PM, Mel Gorman wrote:
> > On Tue, May 25, 2021 at 01:39:29AM +0200, Vlastimil Babka wrote:
> >> Currently __slab_alloc() disables irqs around the whole ___slab_alloc().
> >> This includes cases where it is not needed, such as when the allocation
> >> ends up in the page allocator and has to awkwardly re-enable irqs based
> >> on gfp flags. Also the whole kmem_cache_alloc_bulk() is executed with
> >> irqs disabled even when it hits the __slab_alloc() slow path, and long
> >> periods with interrupts disabled are undesirable.
> >> 
> >> As a first step towards reducing irq disabled periods, move irq handling
> >> into ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu
> >> pointer from becoming invalid via migrate_disable(). This does not protect
> >> against access from a preempting task; that is still prevented by disabled
> >> irqs for most of ___slab_alloc(). As a small immediate benefit, the
> >> slab_out_of_memory() call from ___slab_alloc() is now done with irqs
> >> enabled.
> >> 
> >> kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables them
> >> before calling ___slab_alloc(), which then disables them at its discretion. The
> >> whole kmem_cache_alloc_bulk() operation also disables cpu migration.
> >> 
> >> When ___slab_alloc() calls new_slab() to allocate a new page, re-enable
> >> preemption, because new_slab() will re-enable interrupts in contexts that allow
> >> blocking.
> >> 
> >> The patch itself will thus increase overhead a bit due to disabled migration
> >> and increased disabling/enabling irqs in kmem_cache_alloc_bulk(), but that will
> >> be gradually improved in the following patches.
> >> 
> >> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
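
For illustration, the caller pattern the description above implies is roughly
the following (a sketch only; error handling and any re-read of c under
CONFIG_PREEMPT are omitted):

	static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
				  unsigned long addr, struct kmem_cache_cpu *c)
	{
		void *p;

		/* keep s->cpu_slab stable; irqs are now handled in ___slab_alloc() */
		migrate_disable();
		p = ___slab_alloc(s, gfpflags, node, addr, c);
		migrate_enable();
		return p;
	}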
> > 
> > Why did you use migrate_disable instead of preempt_disable? There is a
> > fairly large comment in include/linux/preempt.h on why migrate_disable
> > is undesirable so new users are likely to be put under the microscope
> > once Thomas or Peter notice it.
> 
> I understood it as: while undesirable, there's nothing better for now.
> 

I think the "better" option is to reduce the preempt_disable sections as
much as possible, but you probably have limited options there. It would be
easier to justify if the sections you were protecting needed to sleep, as
the sections in mm/highmem.c do, but that does not appear to be the case.
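
To make that concrete: the pattern that genuinely needs migrate_disable() is
a section that may sleep while relying on a per-CPU pointer staying valid,
something like this (purely illustrative, not code from this series;
some_pcpu_state and do_something() are made up):

	migrate_disable();
	state = this_cpu_ptr(&some_pcpu_state);
	mutex_lock(&some_mutex);	/* may sleep; illegal under preempt_disable() */
	do_something(state);
	mutex_unlock(&some_mutex);
	migrate_enable();

A section that never sleeps can use plain preempt_disable() instead.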

> > I think you are using it so that an allocation request can be preempted by
> > a higher priority task but given that the code was disabling interrupts,
> > there was already some preemption latency.
> 
> Yes, and the disabled interrupts will get progressively "smaller" in the series.
> 
> > However, migrate_disable
> > is more expensive than preempt_disable (function call versus a simple
> > increment).
> 
> That's true. I think perhaps it could be reimplemented so that on
> !PREEMPT_RT, and with no lockdep/preempt/whatnot debugging, it would just
> translate to an inline preempt_disable()?
> 

It might be a bit too large for that.

> > On that basis, I'd recommend starting with preempt_disable
> > and only using migrate_disable if necessary.
> 
> That's certainly possible, and you're right that it would be a less
> disruptive step. My thinking was that on !PREEMPT_RT it's actually just
> preempt_disable (though currently with function call overhead), but
> PREEMPT_RT would welcome the lack of preempt disabling. I'd be interested
> to hear the RT guys' opinion here.
> 

It does more than preempt_disable even on !PREEMPT_RT. It's only on !SMP
that it becomes inline. While it might allow a higher priority task to
preempt, PREEMPT_RT is also not the common case and I think it's better
to use the lighter-weight option for the majority of configurations.
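
For reference, the weight difference is visible in the implementations.
Simplified from my reading of current mainline (include/linux/preempt.h and
kernel/sched/core.c), so treat it as a sketch rather than verbatim source:

	/* preempt_disable(): an inlined counter increment */
	#define preempt_disable() \
	do { \
		preempt_count_inc(); \
		barrier(); \
	} while (0)

	/* migrate_disable(): an out-of-line call into the scheduler */
	void migrate_disable(void)
	{
		struct task_struct *p = current;

		if (p->migration_disabled) {
			p->migration_disabled++;
			return;
		}

		preempt_disable();
		this_rq()->nr_pinned++;	/* pin the task to this runqueue */
		p->migration_disabled = 1;
		preempt_enable();
	}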

> > Bonus points for adding a comment where ___slab_alloc disables IRQs to
> > clarify what is protected -- I assume it's protecting kmem_cache_cpu
> > from being modified from interrupt context. If so, it's potentially a
> > local_lock candidate.
> 
> Yeah that gets cleared up later :)
> 

I saw that after glancing through the rest of the series. While I didn't
spot anything major, I'd also like to hear from Peter or Thomas on whether
migrate_disable or preempt_disable would be preferred for mm/slub.c. The
preempt-rt tree does not help answer the question given that the slub
changes there are mostly about deferring some work until IRQs are enabled.
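
If it does end up as a local_lock, I'd expect the usage to look something
like this (hypothetical field name, just to illustrate the API):

	struct kmem_cache_cpu {
		local_lock_t lock;	/* protects freelist, tid, page */
		void **freelist;
		unsigned long tid;
		struct page *page;
		/* ... */
	};

	unsigned long flags;

	local_lock_irqsave(&s->cpu_slab->lock, flags);
	/* manipulate the cpu slab freelist safely */
	local_unlock_irqrestore(&s->cpu_slab->lock, flags);

On !PREEMPT_RT that is local_irq_save() plus lockdep annotations; on
PREEMPT_RT it becomes a per-CPU spinlock, which is exactly the distinction
being discussed above.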

-- 
Mel Gorman
SUSE Labs

Thread overview: 60+ messages
2021-05-24 23:39 [RFC 00/26] SLUB: use local_lock for kmem_cache_cpu protection and reduce disabling irqs Vlastimil Babka
2021-05-24 23:39 ` [RFC 01/26] mm, slub: allocate private object map for sysfs listings Vlastimil Babka
2021-05-25  8:06   ` Christoph Lameter
2021-05-25 10:13   ` Mel Gorman
2021-05-24 23:39 ` [RFC 02/26] mm, slub: allocate private object map for validate_slab_cache() Vlastimil Babka
2021-05-25  8:09   ` Christoph Lameter
2021-05-25 10:17   ` Mel Gorman
2021-05-25 10:36     ` Vlastimil Babka
2021-05-25 11:33       ` Mel Gorman
2021-06-08 10:37         ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 03/26] mm, slub: don't disable irq for debug_check_no_locks_freed() Vlastimil Babka
2021-05-25 10:24   ` Mel Gorman
2021-05-24 23:39 ` [RFC 04/26] mm, slub: simplify kmem_cache_cpu and tid setup Vlastimil Babka
2021-05-25 11:47   ` Mel Gorman
2021-05-24 23:39 ` [RFC 05/26] mm, slub: extract get_partial() from new_slab_objects() Vlastimil Babka
2021-05-25  9:03   ` Christoph Lameter
2021-05-25 11:54   ` Mel Gorman
2021-05-24 23:39 ` [RFC 06/26] mm, slub: dissolve new_slab_objects() into ___slab_alloc() Vlastimil Babka
2021-05-25  9:06   ` Christoph Lameter
2021-05-25 11:59   ` Mel Gorman
2021-05-24 23:39 ` [RFC 07/26] mm, slub: return slab page from get_partial() and set c->page afterwards Vlastimil Babka
2021-05-25  9:12   ` Christoph Lameter
2021-06-08 10:48     ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 08/26] mm, slub: restructure new page checks in ___slab_alloc() Vlastimil Babka
2021-05-25 12:09   ` Mel Gorman
2021-05-24 23:39 ` [RFC 09/26] mm, slub: move disabling/enabling irqs to ___slab_alloc() Vlastimil Babka
2021-05-25 12:35   ` Mel Gorman
2021-05-25 12:47     ` Vlastimil Babka
2021-05-25 15:10       ` Mel Gorman [this message]
2021-05-25 17:24       ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 10/26] mm, slub: do initial checks in ___slab_alloc() with irqs enabled Vlastimil Babka
2021-05-25 13:04   ` Mel Gorman
2021-06-08 12:13     ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 11/26] mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() Vlastimil Babka
2021-05-25 16:00   ` Jann Horn
2021-05-24 23:39 ` [RFC 12/26] mm, slub: restore irqs around calling new_slab() Vlastimil Babka
2021-05-24 23:39 ` [RFC 13/26] mm, slub: validate partial and newly allocated slabs before loading them Vlastimil Babka
2021-05-24 23:39 ` [RFC 14/26] mm, slub: check new pages with restored irqs Vlastimil Babka
2021-05-24 23:39 ` [RFC 15/26] mm, slub: stop disabling irqs around get_partial() Vlastimil Babka
2021-05-24 23:39 ` [RFC 16/26] mm, slub: move reset of c->page and freelist out of deactivate_slab() Vlastimil Babka
2021-05-24 23:39 ` [RFC 17/26] mm, slub: make locking in deactivate_slab() irq-safe Vlastimil Babka
2021-05-24 23:39 ` [RFC 18/26] mm, slub: call deactivate_slab() without disabling irqs Vlastimil Babka
2021-05-24 23:39 ` [RFC 19/26] mm, slub: move irq control into unfreeze_partials() Vlastimil Babka
2021-05-24 23:39 ` [RFC 20/26] mm, slub: discard slabs in unfreeze_partials() without irqs disabled Vlastimil Babka
2021-05-24 23:39 ` [RFC 21/26] mm, slub: detach whole partial list at once in unfreeze_partials() Vlastimil Babka
2021-05-24 23:39 ` [RFC 22/26] mm, slub: detach percpu partial list in unfreeze_partials() using this_cpu_cmpxchg() Vlastimil Babka
2021-05-24 23:39 ` [RFC 23/26] mm, slub: only disable irq with spin_lock in __unfreeze_partials() Vlastimil Babka
2021-05-24 23:39 ` [RFC 24/26] mm, slub: don't disable irqs in slub_cpu_dead() Vlastimil Babka
2021-05-24 23:39 ` [RFC 25/26] mm, slub: use migrate_disable() in put_cpu_partial() Vlastimil Babka
2021-05-25 15:33   ` Jann Horn
2021-06-09  8:41     ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 26/26] mm, slub: convert kmem_cpu_slab protection to local_lock Vlastimil Babka
2021-05-25 16:11   ` Vlastimil Babka
