From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dan Williams
Date: Mon, 16 Sep 2019 10:46:10 -0700
Subject: Re: [PATCH 2/2] powerpc/nvdimm: Update vmemmap_populated to check sub-section range
To: "Aneesh Kumar K.V"
References: <20190910062826.10041-1-aneesh.kumar@linux.ibm.com> <20190910062826.10041-2-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20190910062826.10041-2-aneesh.kumar@linux.ibm.com>
Cc: Michael Ellerman, linuxppc-dev, linux-nvdimm
List-Id: "Linux-nvdimm developer list."

On Mon, Sep 9, 2019 at 11:29 PM Aneesh Kumar K.V wrote:
>
> With commit: 7cc7867fb061 ("mm/devm_memremap_pages: enable sub-section remap")
> pmem namespaces are remapped in 2M chunks. On architectures like ppc64 we
> can map the memmap area using 16MB hugepage size, and that can cover
> a memory range of 16G.
>
> While enabling new pmem namespaces, since memory is added in sub-section
> chunks, before creating a new memmap mapping the kernel should check whether
> there is an existing memmap mapping covering the new pmem namespace.
> Currently, this is validated by checking whether the section covering the
> range is already initialized or not. Considering there can be multiple
> namespaces in the same section, this can result in wrong validation. Update
> this to check for sub-sections in the range. This is done by checking all
> pfns in the range we are mapping.
>
> We could optimize this by checking just one pfn in each sub-section. But
> since this is not a fast path we keep it simple.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
>  arch/powerpc/mm/init_64.c | 45 ++++++++++++++++++++-------------------
>  1 file changed, 23 insertions(+), 22 deletions(-)
>
> diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
> index 4e08246acd79..7710ccdc19a2 100644
> --- a/arch/powerpc/mm/init_64.c
> +++ b/arch/powerpc/mm/init_64.c
> @@ -70,30 +70,24 @@ EXPORT_SYMBOL_GPL(kernstart_addr);
>
>  #ifdef CONFIG_SPARSEMEM_VMEMMAP
>  /*
> - * Given an address within the vmemmap, determine the pfn of the page that
> - * represents the start of the section it is within.  Note that we have to
> - * do this by hand as the proffered address may not be correctly aligned.
> - * Subtraction of non-aligned pointers produces undefined results.
> - */
> -static unsigned long __meminit vmemmap_section_start(unsigned long page)
> -{
> -	unsigned long offset = page - ((unsigned long)(vmemmap));
> -
> -	/* Return the pfn of the start of the section. */
> -	return (offset / sizeof(struct page)) & PAGE_SECTION_MASK;
> -}
> -
> -/*
> - * Check if this vmemmap page is already initialised.  If any section
> + * Check if this vmemmap page is already initialised.  If any sub section
>  * which overlaps this vmemmap page is initialised then this page is
>  * initialised already.
>  */
> -static int __meminit vmemmap_populated(unsigned long start, int page_size)
> +
> +static int __meminit vmemmap_populated(unsigned long start, int size)
>  {
> -	unsigned long end = start + page_size;
> -	start = (unsigned long)(pfn_to_page(vmemmap_section_start(start)));
> +	unsigned long end = start + size;
>
> -	for (; start < end; start += (PAGES_PER_SECTION * sizeof(struct page)))
> +	/* start is size aligned and it is always > sizeof(struct page) */
> +	VM_BUG_ON(start & sizeof(struct page));

If start is size aligned, why not include that assumption in the
VM_BUG_ON()?

Otherwise it seems this patch could be reduced simply to:

s/PAGE_SECTION_MASK/PAGE_SUBSECTION_MASK/
s/PAGES_PER_SECTION/PAGES_PER_SUBSECTION/

...and leave the vmemmap_section_start() function in place?

In other words, this path used to guarantee that 'start' was aligned
to the minimum mem-hotplug granularity; the change looks ok on the
surface, but it seems a subtle change in semantics. Can you get an ack
from a powerpc maintainer, or maybe this patch should route through
the powerpc tree? I'll take patch 1 through the nvdimm tree.
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm