Subject: [PATCH 0/9] Allow persistent memory to be used like normal RAM
From: Dave Hansen
Date: Mon, 22 Oct 2018 13:13:17 -0700
Message-Id: <20181022201317.8558C1D8@viggo.jf.intel.com>
To: linux-kernel@vger.kernel.org
Cc: thomas.lendacky@amd.com, mhocko@suse.com, linux-nvdimm@lists.01.org,
 Dave Hansen, ying.huang@intel.com, linux-mm@kvack.org, zwisler@kernel.org,
 fengguang.wu@intel.com, akpm@linux-foundation.org

Persistent memory is cool.  But, currently, you have to rewrite
your applications to use it.  Wouldn't it be cool if you could
just have it show up in your system like normal RAM and get to it
like a slow blob of memory?  Well... have I got the patch series
for you!

This series adds a new "driver" to which pmem devices can be
attached.  Once attached, the memory "owned" by the device is
hot-added to the kernel and managed like any other memory.  On
systems with an HMAT (a new ACPI table), each socket (roughly)
will have a separate NUMA node for its persistent memory, so this
newly-added memory can be selected by its unique NUMA node.

This is highly RFC, and I really want feedback from the
nvdimm/pmem folks about whether this is a viable long-term
perversion of their code and device mode.  It's insufficiently
documented and probably not bisectable either.

Todo:
1. The device re-binding hacks are ham-fisted at best.  We need a
   better way of doing this, especially so the kmem driver does
   not get in the way of normal pmem devices.  (A sketch of the
   manual re-binding dance is appended at the end of this mail.)
2. When the device has no proper node, we default it to NUMA
   node 0.  Is that OK?
3. We muck with the 'struct resource' code quite a bit.  It
   definitely needs a once-over from folks more familiar with it
   than I.
4. Is there a better way to do this than starting with a copy of
   pmem.c?

Here's how I set up a system to test this thing (a few extra
command sketches for these steps are appended at the end of this
mail):

1. Boot qemu with lots of memory: "-m 4096", for instance.
2. Reserve 512MB of physical memory.  Reserving a spot at 2GB
   physical seems to work:

	memmap=512M!0x0000000080000000

   This will end up looking like a pmem device at boot.
3. When booted, convert the fsdax device to "device dax":

	ndctl create-namespace -fe namespace0.0 -m dax

4. In the background, the kmem driver will probably bind to the
   new device.
5. Now, online the new memory sections.  Perhaps:

	grep ^MemTotal /proc/meminfo
	for f in `grep -vl online /sys/devices/system/memory/*/state`; do
		echo $f: `cat $f`
		echo online > $f
		grep ^MemTotal /proc/meminfo
	done

Cc: Dan Williams
Cc: Dave Jiang
Cc: Ross Zwisler
Cc: Vishal Verma
Cc: Tom Lendacky
Cc: Andrew Morton
Cc: Michal Hocko
Cc: linux-nvdimm@lists.01.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Huang Ying
Cc: Fengguang Wu
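
A minimal qemu invocation covering steps 1 and 2 might look
something like this.  The kernel image and disk paths are
placeholders, and the -append line just carries the memmap=
reservation from step 2 on the kernel command line:

	qemu-system-x86_64 -enable-kvm -m 4096 \
		-kernel /path/to/bzImage \
		-append "root=/dev/sda console=ttyS0 memmap=512M!0x0000000080000000" \
		-hda /path/to/disk.img \
		-nographic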
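
To double-check step 3, ndctl can list the namespaces and their
modes.  The exact mode string it reports ("dax" vs. "devdax")
depends on the ndctl version:

	# Show all namespaces; the reconfigured one should no
	# longer be in fsdax mode:
	ndctl list -N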
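
Once the sections are online, the persistent memory shows up as
its own NUMA node and can be steered at with stock NUMA tooling.
A sketch, assuming the new memory lands on node 1 (the actual
node number will vary by system, and ./my_app is a stand-in for
whatever you want to test):

	# Confirm the new node and its size showed up:
	numactl --hardware

	# Force an app's allocations onto the (slow) pmem node:
	numactl --membind=1 ./my_app

	# Or just prefer it, falling back to DRAM when it fills:
	numactl --preferred=1 ./my_app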
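
On todo #1: until there's a better interface, the generic
driver-model bind/unbind files are one way to move a device
between the normal pmem driver and kmem by hand.  This is only a
sketch of the mechanism; $BUS, $DEV, and $OLD_DRV are
placeholders, not necessarily what this series ends up wiring up:

	# Detach the device from the driver it probed against:
	echo -n $DEV > /sys/bus/$BUS/drivers/$OLD_DRV/unbind

	# Hand the same device to the kmem driver instead:
	echo -n $DEV > /sys/bus/$BUS/drivers/kmem/bind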