From: Huaisheng Ye <yehs1@lenovo.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: mhocko@suse.com, willy@infradead.org, vbabka@suse.cz,
	mgorman@techsingularity.net, pasha.tatashin@oracle.com,
	alexander.levin@verizon.com, hannes@cmpxchg.org,
	penguin-kernel@I-love.SAKURA.ne.jp, colyli@suse.de,
	chengnt@lenovo.com, hehy1@lenovo.com,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	Huaisheng Ye <yehs1@lenovo.com>
Subject: [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone
Date: Tue,  8 May 2018 10:30:22 +0800
Message-ID: <1525746628-114136-1-git-send-email-yehs1@lenovo.com>

Traditionally, the mm (memory management) subsystem treats NVDIMM as
ZONE_DEVICE, a virtual zone whose start and end pfns are both 0. mm does
not manage NVDIMM directly the way it manages DRAM; instead the kernel
relies on the corresponding drivers, located under drivers/nvdimm/ and
drivers/acpi/nfit, together with the filesystem layer, to allocate and
free NVDIMM memory through the memory hotplug implementation.
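
For reference, the kernel's check for such pages reflects this design: a
ZONE_DEVICE page is recognised purely by its zone number (simplified
from include/linux/mm.h; the real definition sits behind
CONFIG_ZONE_DEVICE guards):

/*
 * Simplified from include/linux/mm.h: ZONE_DEVICE pages are identified
 * only by their zone number; they are set up by the device/pmem drivers
 * and are never handed out by the buddy allocator.
 */
static inline bool is_zone_device_page(const struct page *page)
{
	return page_zonenum(page) == ZONE_DEVICE;
}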

With the current kernel, many of mm's classical features, such as the
buddy system, the swap mechanism and the page cache, cannot be used with
NVDIMM. These patches expand mm so that it can handle NVDIMM like DRAM,
while still treating DRAM and NVDIMM separately: mm places only critical
pages in the NVDIMM-backed zone. For that purpose we introduce a new
zone type, the NVM zone. Traditional (normal) pages continue to be
stored in DRAM-backed zones such as Normal, DMA32 and DMA, whereas
critical pages, which we want to be recoverable after a power failure or
system crash, are made persistent by placing them in the NVM zone.
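
For illustration only, allocating one of these critical pages could look
roughly like the sketch below; the flag name __GFP_NVM is an assumption
made for this example, and the way the series actually routes such
requests through GFP_ZONE_TABLE (patch 3) may differ:

#include <linux/gfp.h>

/*
 * Hypothetical helper: __GFP_NVM is an assumed flag name for this
 * sketch, standing in for however the series marks an allocation as
 * targeting ZONE_NVM via GFP_ZONE_TABLE.
 */
static struct page *alloc_critical_page(void)
{
	/* Ask the buddy allocator for a single page backed by the NVM
	 * zone, so its contents survive a power failure or crash. */
	return alloc_pages(GFP_KERNEL | __GFP_NVM, 0);
}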

As a development platform we installed two NVDIMMs, each with 125 GB of
capacity, in a Lenovo ThinkSystem server. With the patches below, mm can
create an NVM zone for each NVDIMM.

Here is the dmesg output:
 Initmem setup node 0 [mem 0x0000000000001000-0x000000237fffffff]
 On node 0 totalpages: 36879666
   DMA zone: 64 pages used for memmap
   DMA zone: 23 pages reserved
   DMA zone: 3999 pages, LIFO batch:0
 mminit::memmap_init Initialising map node 0 zone 0 pfns 1 -> 4096
   DMA32 zone: 10935 pages used for memmap
   DMA32 zone: 699795 pages, LIFO batch:31
 mminit::memmap_init Initialising map node 0 zone 1 pfns 4096 -> 1048576
   Normal zone: 53248 pages used for memmap
   Normal zone: 3407872 pages, LIFO batch:31
 mminit::memmap_init Initialising map node 0 zone 2 pfns 1048576 -> 4456448
   NVM zone: 512000 pages used for memmap
   NVM zone: 32768000 pages, LIFO batch:31
 mminit::memmap_init Initialising map node 0 zone 3 pfns 4456448 -> 37224448
 Initmem setup node 1 [mem 0x0000002380000000-0x00000046bfffffff]
 On node 1 totalpages: 36962304
   Normal zone: 65536 pages used for memmap
   Normal zone: 4194304 pages, LIFO batch:31
 mminit::memmap_init Initialising map node 1 zone 2 pfns 37224448 -> 41418752
   NVM zone: 512000 pages used for memmap
   NVM zone: 32768000 pages, LIFO batch:31
 mminit::memmap_init Initialising map node 1 zone 3 pfns 41418752 -> 74186752

And here is the corresponding /proc/zoneinfo output:
Node 0, zone      NVM
  pages free     32768000
        min      15244
        low      48012
        high     80780
        spanned  32768000
        present  32768000
        managed  32768000
        protection: (0, 0, 0, 0, 0, 0)
        nr_free_pages 32768000
Node 1, zone      NVM
  pages free     32768000
        min      15244
        low      48012
        high     80780
        spanned  32768000
        present  32768000
        managed  32768000


Huaisheng Ye (6):
  mm/memblock: Expand definition of flags to support NVDIMM
  mm/page_alloc.c: get pfn range with flags of memblock
  mm, zone_type: create ZONE_NVM and fill into GFP_ZONE_TABLE
  arch/x86/kernel: mark NVDIMM regions from e820_table
  mm: get zone spanned pages separately for DRAM and NVDIMM
  arch/x86/mm: create page table mapping for DRAM and NVDIMM both

 arch/x86/include/asm/e820/api.h |  3 +++
 arch/x86/kernel/e820.c          | 20 +++++++++++++-
 arch/x86/kernel/setup.c         |  8 ++++++
 arch/x86/mm/init_64.c           | 16 +++++++++++
 include/linux/gfp.h             | 57 ++++++++++++++++++++++++++++++++++++---
 include/linux/memblock.h        | 19 +++++++++++++
 include/linux/mm.h              |  4 +++
 include/linux/mmzone.h          |  3 +++
 mm/Kconfig                      | 16 +++++++++++
 mm/memblock.c                   | 46 +++++++++++++++++++++++++++----
 mm/nobootmem.c                  |  5 ++--
 mm/page_alloc.c                 | 60 ++++++++++++++++++++++++++++++++++++++++-
 12 files changed, 245 insertions(+), 12 deletions(-)

-- 
1.8.3.1

Thread overview: 25+ messages
2018-05-08  2:30 Huaisheng Ye [this message]
     [not found] ` <1525746628-114136-2-git-send-email-yehs1@lenovo.com>
2018-05-08  2:30   ` [External] [RFC PATCH v1 1/6] mm/memblock: Expand definition of flags to support NVDIMM Huaisheng HS1 Ye
2018-05-08  2:30 ` [RFC PATCH v1 4/6] arch/x86/kernel: mark NVDIMM regions from e820_table Huaisheng Ye
     [not found] ` <1525746628-114136-3-git-send-email-yehs1@lenovo.com>
2018-05-08  2:32   ` [External] [RFC PATCH v1 2/6] mm/page_alloc.c: get pfn range with flags of memblock Huaisheng HS1 Ye
     [not found] ` <1525746628-114136-4-git-send-email-yehs1@lenovo.com>
2018-05-08  2:33   ` [External] [RFC PATCH v1 3/6] mm, zone_type: create ZONE_NVM and fill into GFP_ZONE_TABLE Huaisheng HS1 Ye
2018-05-08  4:43     ` Randy Dunlap
2018-05-09  4:22       ` Huaisheng HS1 Ye
2018-05-09 11:47         ` Michal Hocko
2018-05-09 14:04           ` Huaisheng HS1 Ye
2018-05-09 20:56             ` Michal Hocko
2018-05-10  3:53               ` Huaisheng HS1 Ye
     [not found] ` <1525746628-114136-6-git-send-email-yehs1@lenovo.com>
2018-05-08  2:34   ` [External] [RFC PATCH v1 5/6] mm: get zone spanned pages separately for DRAM and NVDIMM Huaisheng HS1 Ye
     [not found] ` <1525746628-114136-7-git-send-email-yehs1@lenovo.com>
2018-05-08  2:35   ` [External] [RFC PATCH v1 6/6] arch/x86/mm: create page table mapping for DRAM and NVDIMM both Huaisheng HS1 Ye
2018-05-10  7:57 ` [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone Michal Hocko
2018-05-10  8:41   ` Michal Hocko
  -- strict thread matches above, loose matches on Subject: below --
2018-05-08  2:00 Huaisheng Ye
2018-05-07 14:50 Huaisheng Ye
2018-05-07 18:46 ` Matthew Wilcox
2018-05-07 18:57   ` Dan Williams
2018-05-07 19:08     ` Jeff Moyer
2018-05-07 19:17       ` Dan Williams
2018-05-07 19:28         ` Jeff Moyer
2018-05-07 19:29           ` Dan Williams
2018-05-07 19:18     ` Matthew Wilcox
2018-05-07 19:30       ` Dan Williams
