From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Ard Biesheuvel <ardb@kernel.org>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	Feiyang Chen <chenfeiyang@loongson.cn>,
	Alistair Popple <apopple@nvidia.com>,
	Ralph Campbell <rcampbell@nvidia.com>,
	<linux-arm-kernel@lists.infradead.org>,
	LKML <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
	John Hubbard <jhubbard@nvidia.com>, <stable@vger.kernel.org>
Subject: [PATCH] arm64/mm: don't WARN when alloc/free-ing device private pages
Date: Wed, 5 Apr 2023 21:05:15 -0700
Message-ID: <20230406040515.383238-1-jhubbard@nvidia.com>

Although CONFIG_DEVICE_PRIVATE, hmm_range_fault(), and related
functionality were first developed on x86, they also work on arm64.
However, when trying this out on an arm64 system, it turns out that
there is a massive slowdown during the setup and teardown phases.

This slowdown is caused by many WARN_ON() calls that check for pages
that are outside the CPU's physical address range. However, that is a
design feature of device private pages: they are specifically chosen
to be outside the range of the CPU's true physical pages.
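
For background, here is a minimal sketch (not part of this patch, and
modeled loosely on lib/test_hmm.c) of how a driver typically creates
device private pages; the variable and ops names below are illustrative
only:

  struct dev_pagemap pgmap = { };
  struct resource *res;

  /* Reserve an unused physical range beyond the end of real RAM. */
  res = request_free_mem_region(&iomem_resource, size, "device-private");

  pgmap.type = MEMORY_DEVICE_PRIVATE;
  pgmap.range.start = res->start;
  pgmap.range.end = res->end;
  pgmap.nr_range = 1;
  pgmap.ops = &my_devmem_ops; /* needs .page_free and .migrate_to_ram */

  /*
   * memremap_pages() creates struct pages for PFNs that the CPU cannot
   * actually address, which is what ends up tripping the arm64
   * WARN_ON()s shown below during setup and teardown.
   */
  memremap_pages(&pgmap, numa_node_id());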

x86 doesn't have this warning. It only checks that the pages are
properly aligned. I've shown a comparison below between x86 (which
works well) and arm64 (which issues these warnings).

memunmap_pages()
  pageunmap_range()
    if (pgmap->type == MEMORY_DEVICE_PRIVATE)
      __remove_pages()
        __remove_section()
          sparse_remove_section()
            section_deactivate()
              depopulate_section_memmap()
                /* arch/arm64/mm/mmu.c */
                vmemmap_free()
                {
                  WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
                  ...
                }

                /* arch/x86/mm/init_64.c */
                vmemmap_free()
                {
                  VM_BUG_ON(!PAGE_ALIGNED(start));
                  VM_BUG_ON(!PAGE_ALIGNED(end));
                  ...
                }

So, the warning is a false positive for this case. Therefore, skip the
warning if CONFIG_DEVICE_PRIVATE is set.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
cc: <stable@vger.kernel.org>
---
 arch/arm64/mm/mmu.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6f9d8898a025..d5c9b611a8d1 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1157,8 +1157,10 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
+/* Device private pages are outside of the CPU's physical page range. */
+#ifndef CONFIG_DEVICE_PRIVATE
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
-
+#endif
 	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
@@ -1169,8 +1171,10 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 void vmemmap_free(unsigned long start, unsigned long end,
 		struct vmem_altmap *altmap)
 {
+/* Device private pages are outside of the CPU's physical page range. */
+#ifndef CONFIG_DEVICE_PRIVATE
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
-
+#endif
 	unmap_hotplug_range(start, end, true, altmap);
 	free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
 }
-- 
2.40.0

