From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Yu Zhao <yuzhao@google.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: avoid slub allocation while holding list_lock
Date: Tue, 10 Sep 2019 12:16:24 +0300
Message-ID: <20190910091624.3knf6mzorkki67nb@box.shutemov.name>
In-Reply-To: <20190909213938.GA53078@google.com>

On Mon, Sep 09, 2019 at 03:39:38PM -0600, Yu Zhao wrote:
> On Tue, Sep 10, 2019 at 05:57:22AM +0900, Tetsuo Handa wrote:
> > On 2019/09/10 1:00, Kirill A. Shutemov wrote:
> > > On Mon, Sep 09, 2019 at 12:10:16AM -0600, Yu Zhao wrote:
> >> If we are already under list_lock, don't call kmalloc(). Otherwise we
> >> will run into a deadlock, because kmalloc() also tries to grab the
> >> same lock.
> >>
> >> Instead, allocate pages directly. Given that page->objects currently
> >> has 15 bits, we only need one page. We may waste some memory, but we
> >> only do so when slub debug is on.
> > >>
> > >>   WARNING: possible recursive locking detected
> > >>   --------------------------------------------
> > >>   mount-encrypted/4921 is trying to acquire lock:
> > >>   (&(&n->list_lock)->rlock){-.-.}, at: ___slab_alloc+0x104/0x437
> > >>
> > >>   but task is already holding lock:
> > >>   (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb
> > >>
> > >>   other info that might help us debug this:
> > >>    Possible unsafe locking scenario:
> > >>
> > >>          CPU0
> > >>          ----
> > >>     lock(&(&n->list_lock)->rlock);
> > >>     lock(&(&n->list_lock)->rlock);
> > >>
> > >>    *** DEADLOCK ***
> > >>
> > >> Signed-off-by: Yu Zhao <yuzhao@google.com>
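
A minimal sketch of the approach the changelog describes, for
illustration only (the helper names below are made up; this is not the
patch itself): page->objects is limited to 15 bits, so a bitmap with
one bit per object needs at most 32768 bits (4096 bytes), and on a
4 KiB PAGE_SIZE a single page taken straight from the page allocator
is enough. Unlike the kmalloc() slow path, the page allocator never
takes a kmem_cache_node's list_lock.

	/*
	 * Illustrative sketch, not the actual patch. Inside mm/slub.c the
	 * declarations used here (gfp_t, get_zeroed_page(), free_page())
	 * are already in scope.
	 */
	static unsigned long *get_map_page(gfp_t gfp)	/* hypothetical helper */
	{
		/* one zeroed page covers up to 32768 object bits */
		return (unsigned long *)get_zeroed_page(gfp);
	}

	static void put_map_page(unsigned long *map)	/* hypothetical helper */
	{
		free_page((unsigned long)map);
	}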
> > > 
> > > Looks sane to me:
> > > 
> > > Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > > 
> > 
> > Really?
> > 
> > Since page->objects is handled as a bitmap, the allocation should be
> > aligned to BITS_PER_LONG rather than BITS_PER_BYTE (though in this
> > particular case, get_order() would implicitly align to
> > BITS_PER_BYTE * PAGE_SIZE). But get_order(0) is undefined behavior.
> 
> I think we can safely assume PAGE_SIZE is unsigned long aligned and
> page->objects is non-zero.

I think it's better to handle page->objects == 0 gracefully. It should not
happen, but this code exists to handle situations that should not happen.
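
As a sketch of what handling it gracefully could look like (illustrative
only, not the committed code; the helper name is made up), the bitmap size
can be rounded up to whole longs, which also gives the BITS_PER_LONG
alignment mentioned above, and get_order() is then never handed 0:

	/*
	 * Hypothetical helper: compute the allocation order for the object
	 * bitmap. BITS_TO_LONGS() rounds up to whole longs, and forcing at
	 * least one object keeps get_order() away from 0.
	 */
	static int object_map_order(unsigned int objects)
	{
		size_t bytes;

		if (!objects)		/* should not happen; handle it anyway */
			objects = 1;
		bytes = BITS_TO_LONGS(objects) * sizeof(unsigned long);
		return get_order(bytes);	/* bytes >= sizeof(long) > 0 */
	}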

-- 
 Kirill A. Shutemov

Thread overview: 64+ messages
2019-09-09  6:10 [PATCH] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-09 16:00 ` Kirill A. Shutemov
2019-09-09 20:57   ` Tetsuo Handa
2019-09-09 21:39     ` Yu Zhao
2019-09-10  1:41       ` Tetsuo Handa
2019-09-10  2:16         ` Yu Zhao
2019-09-10  9:16       ` Kirill A. Shutemov [this message]
2019-09-11 14:13 ` Andrew Morton
2019-09-12  0:29   ` [PATCH 1/3] mm: correct mask size for slub page->objects Yu Zhao
2019-09-12  0:29     ` [PATCH 2/3] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-12  0:44       ` Kirill A. Shutemov
2019-09-12  1:31         ` Yu Zhao
2019-09-12  2:31         ` [PATCH v2 1/4] mm: correct mask size for slub page->objects Yu Zhao
2019-09-12  2:31           ` [PATCH v2 2/4] mm: clean up validate_slab() Yu Zhao
2019-09-12  9:46             ` Kirill A. Shutemov
2019-09-12  2:31           ` [PATCH v2 3/4] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-12 10:04             ` Kirill A. Shutemov
2019-09-12  2:31           ` [PATCH v2 4/4] mm: lock slub page when listing objects Yu Zhao
2019-09-12 10:06             ` Kirill A. Shutemov
2019-09-12 21:12               ` Yu Zhao
2019-09-13 14:58             ` Christopher Lameter
2019-09-12  9:40           ` [PATCH v2 1/4] mm: correct mask size for slub page->objects Kirill A. Shutemov
2019-09-12 21:11             ` Yu Zhao
2019-09-12 22:03               ` Kirill A. Shutemov
2019-09-14  0:07           ` [PATCH v3 1/2] mm: clean up validate_slab() Yu Zhao
2019-09-14  0:07             ` [PATCH v3 2/2] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-16  8:39             ` [PATCH v3 1/2] mm: clean up validate_slab() Kirill A. Shutemov
2019-11-08 19:39             ` [PATCH v4 " Yu Zhao
2019-11-08 19:39               ` [PATCH v4 2/2] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-11-09 20:52                 ` Christopher Lameter
2019-11-09 23:01                   ` Yu Zhao
2019-11-09 23:16                     ` Christopher Lameter
2019-11-10 18:47                       ` Yu Zhao
2019-11-11 15:47                         ` Christopher Lameter
2019-11-11 15:55                           ` [FIX] slub: Remove kmalloc under list_lock from list_slab_objects() V2 Christopher Lameter
2019-11-30 23:09                             ` Andrew Morton
2019-12-01  1:17                               ` Tetsuo Handa
2019-12-02 15:12                               ` Christopher Lameter
2019-12-07 22:03                                 ` Yu Zhao
2020-01-10 14:11                                   ` Vlastimil Babka
2020-01-12 11:03                                     ` Tetsuo Handa
2020-01-13  1:34                                       ` Christopher Lameter
2019-11-11 18:15                           ` [PATCH v4 2/2] mm: avoid slub allocation while holding list_lock Shakeel Butt
2019-09-12  0:29     ` [PATCH 3/3] mm: lock slub page when listing objects Yu Zhao
