nvdimm.lists.linux.dev archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: Jeff Moyer <jmoyer@redhat.com>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, linux-nvdimm@lists.01.org
Subject: Re: [PATCH v2] mm: disallow mapping that conflict for devm_memremap_pages()
Date: Wed, 18 Jul 2018 11:36:21 -0700	[thread overview]
Message-ID: <20180718183621.GE4949@bombadil.infradead.org> (raw)
In-Reply-To: <x49efg04cx8.fsf@segfault.boston.devel.redhat.com>

On Wed, Jul 18, 2018 at 02:27:31PM -0400, Jeff Moyer wrote:
> Hi, Dave,
> Dave Jiang <dave.jiang@intel.com> writes:
> > When pmem namespaces are created smaller than the section size, this
> > can cause issues during removal, and a GPF was observed:
> >
> > Add a check for an existing mapping in the same section, and prevent
> > an additional mapping from being created if one is found.
> >
> > Signed-off-by: Dave Jiang <dave.jiang@intel.com>
> > ---
> >
> > v2: Change dev_warn() to dev_WARN() to provide helpful backtrace. (Robert E)
> OK, I can reproduce the issue.  What I don't like about your patch is
> that you can still get yourself into trouble.  Just create a namespace
> with a size that isn't aligned to 128MB, and then all further
> create-namespace operations will fail.  The only "fix" is to delete the
> odd-sized namespace and try again.  And that warning message doesn't
> really help the administrator to figure this out.
>
> Why can't we simply round up to the next section automatically?  Either
> that, or have the kernel export a minimum namespace size of 128MB, and
> have ndctl enforce it?  I know we had some requests for 4MB namespaces,
> but it doesn't sound like those will be very useful if they're going to
> waste 124MB of space.
>
> Or, we could try to fix this problem of having multiple namespaces
> co-exist in the same memblock section.  That seems like the most obvious
> fix, but there must be a reason you didn't pursue it.
>
> Dave, what do you think is the most viable option?

Just as a reminder, the desire for small pmem devices comes from cloud
use cases where you have teeny tiny layers, each of which might contain a
single package (e.g. a webserver or a database).  Because you're going to
run tens of thousands of instances, you don't want each machine to keep
a copy of the program text in pagecache; you want to have it in-memory
once and then DAX-map it in each guest.

While it's OK to waste a certain amount of each guest's physical memory,
when you have hundreds or thousands of these tiny layers, it adds up.


Thread overview: 7+ messages
2018-06-15 20:33 [PATCH v2] mm: disallow mapping that conflict for devm_memremap_pages() Dave Jiang
2018-07-17 21:03 ` Dave Jiang
2018-07-17 21:37 ` Andrew Morton
2018-07-17 21:42   ` Dave Jiang
2018-07-18 18:27 ` Jeff Moyer
2018-07-18 18:36   ` Matthew Wilcox [this message]
2018-07-18 19:23   ` Dave Jiang
