linux-mm.kvack.org archive mirror
From: Dan Williams <dan.j.williams@intel.com>
To: Mikulas Patocka <mpatocka@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>,
	Christoph Hellwig <hch@lst.de>, Linux MM <linux-mm@kvack.org>,
	dm-devel@redhat.com, Ross Zwisler <ross.zwisler@linux.intel.com>,
	Laura Abbott <labbott@redhat.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [dm-devel] [PATCH] vmalloc: introduce vmap_pfn for persistent memory
Date: Thu, 9 Nov 2017 08:49:37 -0800	[thread overview]
Message-ID: <CAPcyv4jb4UW_qjzenyKCbbufSL0rHGBU4OHDQo9BH212Kjtppg@mail.gmail.com> (raw)
In-Reply-To: <alpine.LRH.2.02.1711091130070.9079@file01.intranet.prod.int.rdu2.redhat.com>

On Thu, Nov 9, 2017 at 8:37 AM, Mikulas Patocka <mpatocka@redhat.com> wrote:
>
>
> On Wed, 8 Nov 2017, Dan Williams wrote:
>
>> On Wed, Nov 8, 2017 at 12:26 PM, Mikulas Patocka <mpatocka@redhat.com> wrote:
>> > On Wed, 8 Nov 2017, Christoph Hellwig wrote:
>> >
>> >> Can you start by explaining what you actually need the vmap for?
>> >
>> > It is possible to use LVM on persistent memory. You can create linear or
>> > striped logical volumes on persistent memory, and these volumes still have
>> > the direct_access method, so they can be mapped with the function
>> > dax_direct_access().
>> >
>> > If we create logical volumes on persistent memory, the method
>> > dax_direct_access() won't return the whole device; it will return only a
>> > part. When dax_direct_access() returns the whole device, my driver just
>> > uses it without vmap. When dax_direct_access() returns only a part of the
>> > device, my driver calls it repeatedly to get all the parts and then
>> > assembles the parts into a linear address space with vmap.
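A rough sketch of that assembly loop (illustrative only, not the actual
driver code), assuming the dax_direct_access() signature of that era and
that every returned PFN is backed by a struct page; the helper name
assemble_dax_mapping() is made up:

#include <linux/dax.h>
#include <linux/mm.h>
#include <linux/pfn_t.h>
#include <linux/vmalloc.h>

/*
 * Walk the DAX device fragment by fragment and stitch the pieces into
 * one linear kernel mapping with vmap().  Assumes the pmem region was
 * registered with devm_memremap_pages(), so pfn_t_to_page() is valid
 * for every PFN that dax_direct_access() hands back.
 */
static void *assemble_dax_mapping(struct dax_device *dax_dev, long nr_pages)
{
	struct page **pages;
	void *kaddr, *vaddr = NULL;
	pgoff_t pgoff = 0;
	pfn_t pfn;
	long i, ret;
	int id;

	pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	id = dax_read_lock();
	while (pgoff < nr_pages) {
		/* Each call may cover only one fragment of the volume. */
		ret = dax_direct_access(dax_dev, pgoff, nr_pages - pgoff,
					&kaddr, &pfn);
		if (ret <= 0)
			goto out;
		for (i = 0; i < ret; i++)
			pages[pgoff + i] = nth_page(pfn_t_to_page(pfn), i);
		pgoff += ret;
	}
	vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
out:
	dax_read_unlock(id);
	kvfree(pages);
	return vaddr;
}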
>>
>> I know I proposed "call dax_direct_access() once" as a strawman for an
>> in-kernel driver user, but it's better to call it per access so you
>> can better stay in sync with base driver events like new media errors
>> and unplug / driver-unload. Either that, or at least have a plan for how
>> to handle those events.
>
> Calling it on every access would be an unacceptable performance overhead.
> How is it supposed to work anyway? If something intends to move data on
> persistent memory while some driver accesses it, then we need two functions
> - dax_direct_access() and dax_relinquish_direct_access(). The current
> kernel lacks a function dax_relinquish_direct_access() that would mark a
> region of data as movable, so we can't move the data anyway.

We take a global reference on the hosting device while pages are
registered (see the percpu_ref usage in kernel/memremap.c), and we hold
dax_read_lock() over calls to dax_direct_access() to keep the device
alive for the duration of the call.
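As a rough illustration of that per-access pattern (a sketch, not
existing kernel or driver code; dax_copy_in() is a made-up name), every
dax_direct_access() call would be bracketed by dax_read_lock() /
dax_read_unlock():

#include <linux/dax.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/pfn_t.h>
#include <linux/string.h>

/*
 * Look up the current kernel address for @pgoff on every access and
 * copy out at most one page.  The dax_read_lock() keeps the dax_device
 * alive for the duration of the call, so unplug / driver-unload is
 * handled by the DAX core instead of by a long-lived mapping.
 */
static int dax_copy_in(struct dax_device *dax_dev, pgoff_t pgoff,
		       void *dst, size_t len)
{
	void *kaddr;
	pfn_t pfn;
	long avail;
	int id;

	id = dax_read_lock();
	avail = dax_direct_access(dax_dev, pgoff, 1, &kaddr, &pfn);
	if (avail > 0)
		memcpy(dst, kaddr, min_t(size_t, len, PAGE_SIZE));
	dax_read_unlock(id);

	return avail > 0 ? 0 : -EIO;
}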

> BTW, what happens if we create a write bio that has its pages pointing to
> persistent memory and there is an error when the storage controller attempts
> to do DMA from persistent memory? Will the storage controller react to the
> error in a sensible way, and will the block layer report the error?

While pages are pinned for DMA, the devm_memremap_pages() mapping is
pinned as well. Otherwise, an error reading persistent memory is
identical to an error reading DRAM.
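The keep-alive described above follows the generic percpu_ref pattern;
the snippet below is a self-contained sketch of that pattern, not the
actual kernel/memremap.c code:

#include <linux/completion.h>
#include <linux/gfp.h>
#include <linux/percpu-refcount.h>

/*
 * Users take a reference before touching the mapping and drop it when
 * done (e.g. while pages are pinned for DMA); teardown kills the ref
 * and waits until every outstanding user has dropped theirs.
 */
static struct percpu_ref mapping_ref;
static DECLARE_COMPLETION(mapping_dead);

static void mapping_release(struct percpu_ref *ref)
{
	complete(&mapping_dead);		/* last reference is gone */
}

static int mapping_init(void)
{
	return percpu_ref_init(&mapping_ref, mapping_release, 0, GFP_KERNEL);
}

static bool mapping_get(void)
{
	return percpu_ref_tryget_live(&mapping_ref);
}

static void mapping_put(void)
{
	percpu_ref_put(&mapping_ref);
}

static void mapping_teardown(void)
{
	percpu_ref_kill(&mapping_ref);		/* refuse new users */
	wait_for_completion(&mapping_dead);	/* drain existing users */
	percpu_ref_exit(&mapping_ref);
}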


Thread overview: 23+ messages
2017-11-07 22:03 [PATCH] vmalloc: introduce vmap_pfn for persistent memory Mikulas Patocka
2017-11-08  9:59 ` Christoph Hellwig
2017-11-08 12:33   ` Mikulas Patocka
2017-11-08 15:04     ` Christoph Hellwig
2017-11-08 15:21       ` Mikulas Patocka
2017-11-08 15:35         ` [dm-devel] " Christoph Hellwig
2017-11-08 15:41           ` Dan Williams
2017-11-08 20:15             ` Mikulas Patocka
2017-11-08 20:25               ` Dan Williams
2017-11-09 16:40                 ` Mikulas Patocka
2017-11-09 16:45                   ` Dan Williams
2017-11-09 17:30                     ` Mikulas Patocka
2017-11-09 17:35                       ` Dan Williams
2017-11-08 17:42           ` Mikulas Patocka
2017-11-08 17:47             ` Christoph Hellwig
2017-11-08 20:26               ` Mikulas Patocka
2017-11-08 21:26                 ` Dan Williams
2017-11-09 16:37                   ` Mikulas Patocka
2017-11-09 16:49                     ` Dan Williams [this message]
2017-11-09 18:13                       ` Mikulas Patocka
2017-11-09 18:38                         ` Dan Williams
2017-11-09 18:51                           ` Mikulas Patocka
2017-11-09 18:58                             ` Dan Williams
