From: Dan Williams <dan.j.williams@intel.com>
To: Dave Jiang <dave.jiang@intel.com>
Cc: "linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>
Subject: Re: [PATCH] libnvdimm: fix clear poison locking with spinlock and mempool
Date: Tue, 11 Apr 2017 14:34:19 -0700
Message-ID: <CAPcyv4iedwXh7OsCbooooPo2Txp8hUS-NXEHHsRfqmN3s8e7RA@mail.gmail.com>
In-Reply-To: <149194542385.32517.16332902766093858868.stgit@djiang5-desk3.ch.intel.com>

On Tue, Apr 11, 2017 at 2:17 PM, Dave Jiang <dave.jiang@intel.com> wrote:
> The following warning results from holding a lane spinlock,
> preempt_disable(), or the btt map spinlock and then trying to take the
> reconfig_mutex to walk the poison list and potentially add new entries.
>
> BUG: sleeping function called from invalid context at kernel/locking/mutex.c:747
> in_atomic(): 1, irqs_disabled(): 0, pid: 17159, name: dd
> [..]
> Call Trace:
> dump_stack+0x85/0xc8
> ___might_sleep+0x184/0x250
> __might_sleep+0x4a/0x90
> __mutex_lock+0x58/0x9b0
> ? nvdimm_bus_lock+0x21/0x30 [libnvdimm]
> ? __nvdimm_bus_badblocks_clear+0x2f/0x60 [libnvdimm]
> ? acpi_nfit_forget_poison+0x79/0x80 [nfit]
> ? _raw_spin_unlock+0x27/0x40
> mutex_lock_nested+0x1b/0x20
> nvdimm_bus_lock+0x21/0x30 [libnvdimm]
> nvdimm_forget_poison+0x25/0x50 [libnvdimm]
> nvdimm_clear_poison+0x106/0x140 [libnvdimm]
> nsio_rw_bytes+0x164/0x270 [libnvdimm]
> btt_write_pg+0x1de/0x3e0 [nd_btt]
> ? blk_queue_enter+0x30/0x290
> btt_make_request+0x11a/0x310 [nd_btt]
> ? blk_queue_enter+0xb7/0x290
> ? blk_queue_enter+0x30/0x290
> generic_make_request+0x118/0x3b0
>
> Two things are done to address this issue. First, we introduce a spinlock
> to protect the poison list, so the reconfig_mutex no longer needs to be
> acquired to touch the list. Second, we introduce a mempool for poison list
> entry allocation, which gives us a reserve of entries to allocate from.
> Once the pool is exhausted, entries are allocated with GFP_NOWAIT to avoid
> the sleep that a GFP_KERNEL allocation could incur.
>
[..]
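
If I'm following the patch, the new scheme boils down to something
like this (just a sketch to restate my understanding, not the actual
hunks; "poison_lock" is my placeholder name for the new spinlock, the
rest follows the changelog and the hunk below):

static DEFINE_SPINLOCK(poison_lock);

static int add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length)
{
	struct nd_poison *pl;

	/* must not sleep: callers may hold a lane or btt map spinlock */
	pl = mempool_alloc(poison_mempool, GFP_NOWAIT);
	if (!pl)
		return -ENOMEM;

	pl->start = addr;
	pl->length = length;

	spin_lock(&poison_lock);
	list_add_tail(&pl->list, &nvdimm_bus->poison_list);
	spin_unlock(&poison_lock);

	return 0;
}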

Wait, if we're just going to GFP_NOWAIT in the end, why even bother
with a mempool?

> +
> +       poison_mempool = mempool_create_slab_pool(SZ_4K, poison_cache);

...otherwise if you want to use a mempool I think it should be
minimally sized to cover the number of entries a single i/o can
generate. I don't think we'll ever exhaust 4096 idle nd_poison
entries.
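
Something like the following, i.e. sized to the per-i/o worst case
(illustrative only, the "16" is a stand-in for whatever that worst
case actually works out to be):

	poison_mempool = mempool_create_slab_pool(16, poison_cache);
	if (!poison_mempool)
		return -ENOMEM;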

If we're ok with the memory allocation failing, which I think we are,
we can just keep it simple with kzalloc(GFP_NOWAIT).
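
i.e. drop the mempool and the slab cache entirely and do (sketch,
reusing the same list manipulation under the new spinlock):

	pl = kzalloc(sizeof(*pl), GFP_NOWAIT);
	if (!pl)
		return -ENOMEM;

...and accept that recording a new poison entry can occasionally fail
under memory pressure.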
