From: Dan Williams <dan.j.williams@intel.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Huaisheng Ye <yehs1@lenovo.com>, Michal Hocko <mhocko@suse.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
linux-nvdimm <linux-nvdimm@lists.01.org>,
Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
chengnt@lenovo.com, pasha.tatashin@oracle.com,
Sasha Levin <alexander.levin@verizon.com>,
Linux MM <linux-mm@kvack.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Andrew Morton <akpm@linux-foundation.org>,
colyli@suse.de, Mel Gorman <mgorman@techsingularity.net>,
Vlastimil Babka <vbabka@suse.cz>,
Dave Hansen <dave.hansen@intel.com>
Subject: Re: [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone
Date: Mon, 7 May 2018 11:57:10 -0700
Message-ID: <CAPcyv4hBJN3npXwg3Ur32JSWtKvBUZh7F8W+Exx3BB-uKWwPag@mail.gmail.com>
In-Reply-To: <20180507184622.GB12361@bombadil.infradead.org>
On Mon, May 7, 2018 at 11:46 AM, Matthew Wilcox <willy@infradead.org> wrote:
> On Mon, May 07, 2018 at 10:50:21PM +0800, Huaisheng Ye wrote:
>> Traditionally, NVDIMMs are treated by the mm (memory management)
>> subsystem as ZONE_DEVICE, a virtual zone whose start and end pfns are
>> both 0, so mm does not manage NVDIMM directly the way it manages DRAM.
>> Instead, the kernel relies on the corresponding drivers, which live
>> under drivers/nvdimm/ and drivers/acpi/nfit/, plus the filesystem
>> layer, to implement NVDIMM memory allocation and freeing on top of the
>> memory hotplug implementation.
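
For context, the way applications reach that memory today is through
those drivers rather than the page allocator: a pmem namespace is
exposed as a DAX-capable block device or a device-dax character
device, and userspace mmap()s it. A minimal sketch using only standard
POSIX calls (the device path is just an example):

	#include <stddef.h>
	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	/* Map persistent memory exposed via device-dax; len must honor
	 * the device's alignment (often 2MiB for device-dax). */
	void *map_pmem(size_t len)
	{
		int fd = open("/dev/dax0.0", O_RDWR);	/* example path */
		void *p;

		if (fd < 0)
			return NULL;
		p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			 fd, 0);
		close(fd);	/* mapping stays valid after close */
		return p == MAP_FAILED ? NULL : p;
	}
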
>
> You probably want to let linux-nvdimm know about this patch set.
> Adding to the cc.
Yes, thanks for that!
> Also, I only received patches 0 and 4. What happened
> to 1-3, 5 and 6?
>
>> With the current kernel, many of mm's classical features, such as the
>> buddy system, the swap mechanism and the page cache, cannot be used
>> with NVDIMM. What we are doing is expanding mm's capability so that it
>> can handle NVDIMM like DRAM, while still letting mm treat DRAM and
>> NVDIMM separately: mm places only critical pages in the NVDIMM-backed
>> zone, for which we created a new zone type, the NVM zone. Traditional
>> (normal) pages continue to be stored in the DRAM-backed zones such as
>> Normal, DMA32 and DMA, whereas critical pages, which we want to be
>> recoverable after a power failure or system crash, are made persistent
>> by placing them in the NVM zone.
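
The intent, then, is that a caller steers "critical" allocations into
the new zone with a dedicated GFP flag, much as GFP_DMA32 steers an
allocation into ZONE_DMA32. Since patches 1-3 were not posted in this
thread, the flag name below is only an assumption for illustration,
not a confirmed interface from the series:

	#include <linux/gfp.h>

	/* Sketch only: GFP_NVM is a hypothetical flag standing in for
	 * whatever the unposted patches actually define. */
	static struct page *alloc_persistent_page(void)
	{
		/* one zeroed order-0 page from the (hypothetical) NVM zone */
		return alloc_pages(GFP_NVM | __GFP_ZERO, 0);
	}
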
>>
>> We installed two NVDIMMs, each with 125GB of capacity, in a Lenovo
>> ThinkSystem server as the development platform. With the patches
>> below, mm can create NVM zones for the NVDIMMs.
>>
>> Here is the dmesg info:
>> Initmem setup node 0 [mem 0x0000000000001000-0x000000237fffffff]
>> On node 0 totalpages: 36879666
>> DMA zone: 64 pages used for memmap
>> DMA zone: 23 pages reserved
>> DMA zone: 3999 pages, LIFO batch:0
>> mminit::memmap_init Initialising map node 0 zone 0 pfns 1 -> 4096
>> DMA32 zone: 10935 pages used for memmap
>> DMA32 zone: 699795 pages, LIFO batch:31
>> mminit::memmap_init Initialising map node 0 zone 1 pfns 4096 -> 1048576
>> Normal zone: 53248 pages used for memmap
>> Normal zone: 3407872 pages, LIFO batch:31
>> mminit::memmap_init Initialising map node 0 zone 2 pfns 1048576 -> 4456448
>> NVM zone: 512000 pages used for memmap
>> NVM zone: 32768000 pages, LIFO batch:31
>> mminit::memmap_init Initialising map node 0 zone 3 pfns 4456448 -> 37224448
>> Initmem setup node 1 [mem 0x0000002380000000-0x00000046bfffffff]
>> On node 1 totalpages: 36962304
>> Normal zone: 65536 pages used for memmap
>> Normal zone: 4194304 pages, LIFO batch:31
>> mminit::memmap_init Initialising map node 1 zone 2 pfns 37224448 -> 41418752
>> NVM zone: 512000 pages used for memmap
>> NVM zone: 32768000 pages, LIFO batch:31
>> mminit::memmap_init Initialising map node 1 zone 3 pfns 41418752 -> 74186752
>>
>> This is from /proc/zoneinfo:
>> Node 0, zone NVM
>> pages free 32768000
>> min 15244
>> low 48012
>> high 80780
>> spanned 32768000
>> present 32768000
>> managed 32768000
>> protection: (0, 0, 0, 0, 0, 0)
>> nr_free_pages 32768000
>> Node 1, zone NVM
>> pages free 32768000
>> min 15244
>> low 48012
>> high 80780
>> spanned 32768000
>> present 32768000
>> managed 32768000
I think adding yet another mm zone is the wrong direction. Instead,
what we have been considering is a mechanism to allow a device-dax
instance to be given back to the kernel as a distinct NUMA node
managed by the VM. It seems it's time to dust off those patches.
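
To make that alternative concrete: once a device-dax instance is
onlined as its own NUMA node, no new zone or GFP flag is needed;
existing NUMA placement interfaces already let applications target
that memory. A rough userspace sketch using the stock libnuma API
(link with -lnuma); the node number is whatever the pmem-backed node
turns out to be, and nothing here comes from the not-yet-posted
patches:

	#include <numa.h>
	#include <stddef.h>

	/* Allocate len bytes backed by the pmem-backed NUMA node. */
	void *alloc_on_pmem_node(size_t len, int pmem_node)
	{
		if (numa_available() < 0)
			return NULL;	/* kernel lacks NUMA support */
		return numa_alloc_onnode(len, pmem_node);
	}

	/* Release with numa_free(ptr, len) when done. */
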
Thread overview: 27+ messages
2018-05-07 14:50 [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone Huaisheng Ye
2018-05-07 14:50 ` [RFC PATCH v1 4/6] arch/x86/kernel: mark NVDIMM regions from e820_table Huaisheng Ye
2018-05-07 18:46 ` [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone Matthew Wilcox
2018-05-07 18:57 ` Dan Williams [this message]
2018-05-07 19:08 ` Jeff Moyer
2018-05-07 19:17 ` Dan Williams
2018-05-07 19:28 ` Jeff Moyer
2018-05-07 19:29 ` Dan Williams
2018-05-08 2:59 ` [External] " Huaisheng HS1 Ye
2018-05-08 3:09 ` Matthew Wilcox
2018-05-09 4:47 ` Huaisheng HS1 Ye
2018-05-10 16:27 ` Matthew Wilcox
2018-05-15 16:07 ` Huaisheng HS1 Ye
2018-05-15 16:20 ` Matthew Wilcox
2018-05-16 2:05 ` Huaisheng HS1 Ye
2018-05-16 2:48 ` Dan Williams
2018-05-16 8:33 ` Huaisheng HS1 Ye
2018-05-16 2:52 ` Matthew Wilcox
2018-05-16 4:10 ` Dan Williams
2018-05-08 3:52 ` Dan Williams
2018-05-07 19:18 ` Matthew Wilcox
2018-05-07 19:30 ` Dan Williams
2018-05-08 0:54 ` [External] " Huaisheng HS1 Ye
2018-05-08 2:00 Huaisheng Ye
2018-05-08 2:30 Huaisheng Ye
2018-05-10 7:57 ` Michal Hocko
2018-05-10 8:41 ` Michal Hocko