From: Matthew Wilcox <willy@infradead.org>
To: Christoph Lameter <cl@linux.com>
Cc: linux-mm@kvack.org, Pekka Enberg <penberg@cs.helsinki.fi>,
akpm@linux-foundation.org, Mel Gorman <mel@skynet.ie>,
andi@firstfloor.org, Rik van Riel <riel@redhat.com>
Subject: Re: [RFC 5/6] slub: Slab defrag core
Date: Tue, 7 Mar 2017 14:03:43 -0800 [thread overview]
Message-ID: <20170307220343.GV16328@bombadil.infradead.org> (raw)
In-Reply-To: <20170307212438.294581405@linux.com>
On Tue, Mar 07, 2017 at 03:24:34PM -0600, Christoph Lameter wrote:
> kmem_defrag_get_func (void *get(struct kmem_cache *s, int nr, void **objects))
>
> Must obtain a reference to the listed objects. SLUB guarantees that
> the objects are still allocated. However, other threads may be blocked
> in slab_free() attempting to free objects in the slab. These may succeed
> as soon as get() returns to the slab allocator. The function must
> be able to detect such situations and void such free attempts (for
> example, by clearing the corresponding entry in the objects
> array).
>
> No slab operations may be performed in get(). Interrupts
> are disabled. What can be done is very limited. The slab lock
> for the page that contains the object is taken. Any attempt to perform
> a slab operation may lead to a deadlock.
>
> kmem_defrag_get_func returns a private pointer that is passed to
> kmem_defrag_kick_func(). Should we be unable to obtain all references
> then that pointer may indicate to the kick() function that it should
> not attempt any object removal or move but simply drop the
> references it obtained.

I think calling it 'get' is overly prescriptive of how an implementation should
work. Perhaps 'test'? And returning ERR_PTR if we cannot free all objects?
> kmem_defrag_kick_func (void kick(struct kmem_cache *, int nr, void **objects,
> void *get_result))
>
> After SLUB has established references to the objects in a
> slab it will then drop all locks and use kick() to move objects out
> of the slab. The existence of the object is guaranteed by virtue of
> the earlier obtained references via kmem_defrag_get_func(). The
> callback may perform any slab operation since no locks are held at
> the time of call.
>
> The callback should remove the object from the slab in some way. This
> may be accomplished by reclaiming the object and then running
> kmem_cache_free() or reallocating it and then running
> kmem_cache_free(). Reallocation is advantageous because the partial
> list was just sorted to put the slabs with the most objects first.
> Reallocation is therefore likely to fill up one slab in addition to
> freeing up another. A filled slab can also be removed from the
> partial list, so there can be a double effect.
>
> kmem_defrag_kick_func() does not return a result. SLUB will check
> the number of remaining objects in the slab. If all objects were
> removed then the slab is freed and we have reduced the overall
> fragmentation of the slab cache.

I think 'kick' is a bad name. 'evict', maybe?

Also, xarray, dcache and the inode cache all use RCU to free objects, so
perhaps a sentence or two in here about that would be beneficial ...

	If objects are freed to this slab using RCU, the evict function
	should call rcu_barrier() before returning to ensure that all
	objects have been returned and the slab page can be freed.
> + private = s->get(s, count, vector);
> +
> + /*
> + * Got references. Now we can drop the slab lock. The slab
> + * is frozen so it cannot vanish from under us nor will
> + * allocations be performed on the slab. However, unlocking the
> + * slab will allow concurrent slab_frees to proceed.
> + */
> + slab_unlock(page);
> + local_irq_restore(flags);
> +
> + /*
> + * Perform the KICK callbacks to remove the objects.
> + */
> + s->kick(s, count, vector, private);

	private = s->test(vector, count);
	slab_unlock(page);
	local_irq_restore(flags);
	if (!IS_ERR(private))
		s->evict(vector, count, private);
Thread overview: 12+ messages
2017-03-07 21:24 [RFC 0/6] Slab Fragmentation Reduction V16 Christoph Lameter
2017-03-07 21:24 ` [RFC 1/6] slub: Replace ctor field with ops field in /sys/slab/* Christoph Lameter
2017-03-07 21:24 ` [RFC 2/6] slub: Add defrag_ratio field and sysfs support Christoph Lameter
2017-03-07 21:24 ` [RFC 3/6] slub: Add get() and kick() methods Christoph Lameter
2017-03-07 21:24 ` [RFC 4/6] slub: Sort slab cache list and establish maximum objects for defrag slabs Christoph Lameter
2017-03-07 21:24 ` [RFC 5/6] slub: Slab defrag core Christoph Lameter
2017-03-07 22:03 ` Matthew Wilcox [this message]
2017-03-07 21:24 ` [RFC 6/6] slub: Extend slabinfo to support -D and -F options Christoph Lameter
2017-03-08 14:34 ` [RFC 0/6] Slab Fragmentation Reduction V16 Michal Hocko
2017-03-08 15:58 ` Christoph Lameter
2017-03-13 9:15 ` Michal Hocko
2017-03-13 9:16 ` Michal Hocko