From: Dan Williams
Date: Fri, 22 Mar 2019 11:32:11 -0700
Subject: Re: [PATCH v5 00/10] mm: Sub-section memory hotplug support
To: Michal Hocko
Cc: linux-nvdimm, stable, Linux Kernel Mailing List, Linux MM,
 Jérôme Glisse, Andrew Morton, Vlastimil Babka
In-Reply-To: <20190322180532.GM32418@dhcp22.suse.cz>
References: <155327387405.225273.9325594075351253804.stgit@dwillia2-desk3.amr.corp.intel.com>
 <20190322180532.GM32418@dhcp22.suse.cz>

On Fri, Mar 22, 2019 at 11:06 AM Michal Hocko wrote:
>
> On Fri 22-03-19 09:57:54, Dan Williams wrote:
> > Changes since v4 [1]:
> > - Given v4 was from March of 2017, the bulk of the changes result from
> >   rebasing the patch set from a v4.11-rc2 baseline to v5.1-rc1.
> >
> > - A unit test is added to ndctl to exercise the creation and dax
> >   mounting of multiple independent namespaces in a single 128M section.
> >
> > [1]: https://lwn.net/Articles/717383/
> >
> > ---
> >
> > Quote patch7:
> >
> > "The libnvdimm sub-system has suffered a series of hacks and broken
> > workarounds for the memory-hotplug implementation's awkward
> > section-aligned (128MB) granularity. For example, the following backtrace
> > is emitted when attempting arch_add_memory() with physical address
> > ranges that intersect 'System RAM' (RAM) with 'Persistent Memory' (PMEM)
> > within a given section:
> >
> >  WARNING: CPU: 0 PID: 558 at kernel/memremap.c:300 devm_memremap_pages+0x3b5/0x4c0
> >  devm_memremap_pages attempted on mixed region [mem 0x200000000-0x2fbffffff flags 0x200]
> >  [..]
> >  Call Trace:
> >   dump_stack+0x86/0xc3
> >   __warn+0xcb/0xf0
> >   warn_slowpath_fmt+0x5f/0x80
> >   devm_memremap_pages+0x3b5/0x4c0
> >   __wrap_devm_memremap_pages+0x58/0x70 [nfit_test_iomap]
> >   pmem_attach_disk+0x19a/0x440 [nd_pmem]
> >
> > Recently it was discovered that the problem goes beyond RAM vs PMEM
> > collisions, as some platforms produce PMEM vs PMEM collisions within a
> > given section. The libnvdimm workaround for that case revealed that the
> > libnvdimm section-alignment-padding implementation has been broken for a
> > long while. A fix for that long-standing breakage introduces as many
> > problems as it solves, as it would require a backward-incompatible change
> > to the namespace metadata interpretation. Instead of that dubious route
> > [2], address the root problem in the memory-hotplug implementation."
> >
> > The approach taken is to observe that each section already maintains
> > an array of 'unsigned long' values to hold the pageblock_flags. A single
> > additional 'unsigned long' is added to house a 'sub-section active'
> > bitmask. Each bit tracks the mapped state of one sub-section's worth of
> > capacity, which is SECTION_SIZE / BITS_PER_LONG, or 2MB on x86-64.
>
> So the hotplugable unit is pageblock now, right?

No, with this patchset the hotplug unit is 2MB.

> Why is this sufficient?

2MB is sufficient because it allows mapping a namespace at PMD
granularity, and there is no practical need to go smaller.
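
For reference, the arithmetic behind that 2MB number and the extra
per-section bookkeeping the cover letter describes, sketched with
illustrative names (the exact identifiers in the patches may differ):

    /*
     * Illustrative sketch only: one extra 'unsigned long' per 128MB
     * section, each bit tracking whether one 2MB sub-section is
     * currently mapped.
     */
    #define SECTION_SIZE_BITS   27                              /* 128MB sections on x86-64 */
    #define SECTION_SIZE        (1UL << SECTION_SIZE_BITS)
    #define SUBSECTION_SIZE     (SECTION_SIZE / BITS_PER_LONG)  /* 128MB / 64 = 2MB */

    struct mem_section_usage {
        unsigned long subsection_map;      /* 'sub-section active' bitmask, 1 bit per 2MB */
        unsigned long pageblock_flags[0];  /* existing per-section pageblock bitmap */
    };

Each struct mem_section would carry this usage structure alongside its
existing fields, so the new bitmask sits next to state that is already
consulted on the hotplug paths.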
> What prevents new and creative HW to come up with alignments that do
> not fit there?

There is a resource in hardware memory controllers called
address-decode-registers that controls the mapping granularity. The
minimum granularity today is 64MB, and the pressure as memory sizes
increase is to make that granularity larger, not smaller. So the
hardware pressure is going in the opposite direction of your concern,
at least for persistent memory.

User-defined memory namespaces have this problem, but 2MB is the
default alignment and is sufficient for most uses. PCI Address BARs
that are also mapped with devm_memremap_pages are aligned to their
size, and there is no expectation to support smaller than 2MB. All
that said, to support a smaller sub-section granularity, just add more
bits to the section-active bitmask.

> Do not get me wrong but the section
> as a unit is deeply carved into the memory hotplug and removing all those
> assumptions is a major undertaking

Right, as stated in the cover letter, this does not remove all those
assumptions, it only removes the ones that impact
devm_memremap_pages(). Sub-section support is limited to the
'want_memblock=false' case of arch_add_memory().

> and I would like to know that you are
> not just shifting the problem to a smaller unit and a new/creative HW
> will force us to go even more complicated.

HW will not do this to us. It's software that has the problem.
Namespace creation is unnecessarily constrained to 128MB alignment.
I'm also open to exploring lifting the section alignment constraint
for the 'want_memblock=true' case, but first things first.

> What is the fundamental reason that pmem sections cannot be assigned
> to a section aligned memory range? The physical address space is
> quite large to impose 128MB sections IMHO. I thought this is merely a
> configuration issue.

1) It's not just hardware that imposes this; software wants to be able
to avoid the constraint.

2) The flexibility of the memory controller initialization code is
constrained by address-decode-registers. So while it is simple to say
"just configure it to be aligned", it's not that easy in practice
without throwing away usable memory capacity.

> How often this really happens and how often it is unavoidable.

Again, software can cause this problem at will. Multiple shipping
systems expose this alignment problem in physical address space, for
example:

    https://github.com/pmem/ndctl/issues/76

> > The implication of allowing sections to be piecemeal mapped/unmapped is
> > that the valid_section() helper is no longer authoritative to determine
> > if a section is fully mapped. Instead pfn_valid() is updated to consult
> > the section-active bitmask. Given that typical memory hotplug still has
> > deep "section" dependencies the sub-section capability is limited to
> > 'want_memblock=false' invocations of arch_add_memory(), effectively only
> > devm_memremap_pages() users for now.
>
> Does this mean that pfn_valid is more expensive now? How much?

Negligible, the information to determine whether the sub-section is
valid for a given pfn is in the same cacheline as the section-valid
flag.
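
To make that concrete, a sketch of how the check might be folded into
the existing sparsemem pfn_valid() (the ->usage field and the bit
arithmetic refer to the illustrative structure sketched earlier, not
necessarily the literal patch):

    /* Sketch: pfn_valid() consulting the sub-section map in addition to valid_section() */
    static inline int pfn_valid(unsigned long pfn)
    {
            struct mem_section *ms;

            if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                    return 0;
            ms = __nr_to_section(pfn_to_section_nr(pfn));
            if (!valid_section(ms))
                    return 0;
            /* New: also require the 2MB sub-section holding this pfn to be active */
            return test_bit((pfn & (PAGES_PER_SECTION - 1)) /
                            (SUBSECTION_SIZE >> PAGE_SHIFT),
                            &ms->usage->subsection_map);
    }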
> Also what about the section life time?

A section is live as long as any sub-section is active.

> Who is removing section now?

The last arch_remove_memory() that removes the last sub-section also
clears out the remaining sub-section active bits.
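
On the teardown side, roughly (again a sketch; subsection_mask() and
the final section-removal call are stand-in names, not the literal
patch):

    /* Sketch: sub-section aware removal, driven from arch_remove_memory() */
    static void section_deactivate(unsigned long pfn, unsigned long nr_pages)
    {
            struct mem_section *ms = __pfn_to_section(pfn);
            /* hypothetical helper: bits covering [pfn, pfn + nr_pages) */
            unsigned long mask = subsection_mask(pfn, nr_pages);

            /* drop the 'active' bits for the sub-sections being removed */
            ms->usage->subsection_map &= ~mask;

            /*
             * Only when the final sub-section goes away does the section
             * itself (and its memmap) get torn down.
             */
            if (!ms->usage->subsection_map)
                    sparse_remove_section(ms);
    }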
> I will probably have much more question, but it's friday and I am mostly
> offline already. I would just like to hear much more about the new
> design and resulting assumptions.

Happy to accommodate this discussion. The section alignment has been
an absolute horror to contend with. So I have years' worth of pain to
share for as deep as you want to go on probing why this is needed.