* [PATCH 0/3] mm/hotplug: Pre-validate the address range with platform
@ 2020-12-08  4:16 ` Anshuman Khandual
  0 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2020-12-08  4:16 UTC (permalink / raw)
  To: linux-mm, akpm, david, hca, catalin.marinas
  Cc: linux-arm-kernel, linux-s390, linux-kernel, Anshuman Khandual,
	Vasily Gorbik, Will Deacon, Ard Biesheuvel, Mark Rutland

This series adds a mechanism allowing platforms to weigh in and prevalidate
an incoming address range before proceeding further with memory hotplug.
This helps prevent potential platform errors for the given address range
deeper down the hotplug call chain, which would inevitably fail the hotplug
operation itself.

This mechanism was suggested by David Hildenbrand during an earlier
discussion about a memory hotplug fix on the arm64 platform.

https://lore.kernel.org/linux-arm-kernel/1600332402-30123-1-git-send-email-anshuman.khandual@arm.com/

This mechanism focuses on the addressability aspect, not the [sub]section
alignment aspect. Hence check_hotplug_memory_range() and check_pfn_span()
have been left unchanged. It is an open question whether all of these could
still be unified into an expanded memhp_range_allowed() check that can be
called from multiple memory hot add and remove paths.
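
As a rough sketch of the intended usage (illustrative only; the real
implementations are in patches 2/3 and 3/3), a platform overrides the
callback to describe its mappable window and the hotplug paths consult
memhp_range_allowed() before doing any real work. PLATFORM_MAX_PHYS below
is a made-up placeholder, not an existing macro.

	/* Platform override (sketch) */
	struct range arch_get_mappable_range(void)
	{
		struct range mhp_range;

		mhp_range.start = 0;
		mhp_range.end   = PLATFORM_MAX_PHYS;
		return mhp_range;
	}

	/* Caller in a hotplug path (sketch) */
	if (!memhp_range_allowed(start, size, true))
		return -E2BIG;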

This series applies on v5.10-rc7 and has been tested on arm64, but it has
only been build tested on s390.

Changes in V1:

- Fixed build problems with (MEMORY_HOTPLUG & !MEMORY_HOTREMOVE)
- Added missing prototype for arch_get_mappable_range()
- Added VM_BUG_ON() check for memhp_range_allowed() in arch_add_memory() per David

Changes in RFC V2:

https://lore.kernel.org/linux-mm/1606706992-26656-1-git-send-email-anshuman.khandual@arm.com/

Incorporated all review feedback from David.

- Added a range check in __segment_load() on s390 that would otherwise be lost
- Changed is_private init in pagemap_range()
- Moved the framework into mm/memory_hotplug.c
- Made arch_get_addressable_range() a __weak function
- Renamed arch_get_addressable_range() as arch_get_mappable_range()
- Callback arch_get_mappable_range() only handles range requiring linear mapping
- Merged multiple memhp_range_allowed() checks in register_memory_resource()
- Replaced WARN() with pr_warn() in memhp_range_allowed()
- Replaced error return code ERANGE with E2BIG

Changes in RFC V1:

https://lore.kernel.org/linux-mm/1606098529-7907-1-git-send-email-anshuman.khandual@arm.com/

Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-s390@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org

Anshuman Khandual (3):
  mm/hotplug: Prevalidate the address range being added with platform
  arm64/mm: Define arch_get_mappable_range()
  s390/mm: Define arch_get_mappable_range()

 arch/arm64/mm/mmu.c            | 15 +++----
 arch/s390/mm/extmem.c          |  5 +++
 arch/s390/mm/init.c            | 10 +++++
 arch/s390/mm/vmem.c            |  4 --
 include/linux/memory_hotplug.h |  3 ++
 mm/memory_hotplug.c            | 78 +++++++++++++++++++++++++---------
 mm/memremap.c                  |  6 ++-
 7 files changed, 88 insertions(+), 33 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/3] mm/hotplug: Prevalidate the address range being added with platform
  2020-12-08  4:16 ` Anshuman Khandual
@ 2020-12-08  4:16   ` Anshuman Khandual
  -1 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2020-12-08  4:16 UTC (permalink / raw)
  To: linux-mm, akpm, david, hca, catalin.marinas
  Cc: linux-arm-kernel, linux-s390, linux-kernel, Anshuman Khandual,
	Vasily Gorbik, Will Deacon, Ard Biesheuvel, Mark Rutland

This introduces memhp_range_allowed() which can be called in various memory
hotplug paths to prevalidate, with the platform, the address range that is
being added. memhp_range_allowed() then calls memhp_get_pluggable_range(),
which provides the applicable address range depending on whether a linear
mapping is required or not. For ranges that require a linear mapping, it
calls a new arch callback, arch_get_mappable_range(), which the platform
can override. The new callback thus gives the platform an opportunity to
configure the acceptable memory hotplug address ranges in case there are
constraints.

This mechanism will help prevent platform specific errors deep down during
hotplug calls. It drops the now redundant check_hotplug_memory_addressable()
check in __add_pages() and instead adds a VM_BUG_ON() check to ensure that
the range has been validated with memhp_range_allowed() earlier in the call
chain.
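
The resulting call sites and flow are roughly the following (a sketch
derived from the diff below, not an exhaustive list of callers):

	register_memory_resource()			/* add_memory() and friends */
	    memhp_range_allowed(start, size, true)	/* fails with -E2BIG */
	__add_pages()
	    VM_BUG_ON(!memhp_range_allowed(...))	/* range already validated */
	pagemap_range()					/* ZONE_DEVICE / memremap */
	    memhp_range_allowed(start, len, !is_private)

	memhp_range_allowed()
	    memhp_get_pluggable_range(need_mapping)
	        arch_get_mappable_range()		/* __weak, platform override */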

Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 include/linux/memory_hotplug.h |  3 ++
 mm/memory_hotplug.c            | 78 +++++++++++++++++++++++++---------
 mm/memremap.c                  |  6 ++-
 3 files changed, 66 insertions(+), 21 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 551093b74596..efa3202524c0 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -70,6 +70,9 @@ typedef int __bitwise mhp_t;
  */
 #define MEMHP_MERGE_RESOURCE	((__force mhp_t)BIT(0))
 
+bool memhp_range_allowed(u64 start, u64 size, bool need_mapping);
+struct range arch_get_mappable_range(void);
+
 /*
  * Extended parameters for memory hotplug:
  * altmap: alternative allocator for memmap array (optional)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 63b2e46b6555..797fa7f506b0 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -107,6 +107,9 @@ static struct resource *register_memory_resource(u64 start, u64 size,
 	if (strcmp(resource_name, "System RAM"))
 		flags |= IORESOURCE_SYSRAM_DRIVER_MANAGED;
 
+	if (!memhp_range_allowed(start, size, 1))
+		return ERR_PTR(-E2BIG);
+
 	/*
 	 * Make sure value parsed from 'mem=' only restricts memory adding
 	 * while booting, so that memory hotplug won't be impacted. Please
@@ -284,22 +287,6 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 	return 0;
 }
 
-static int check_hotplug_memory_addressable(unsigned long pfn,
-					    unsigned long nr_pages)
-{
-	const u64 max_addr = PFN_PHYS(pfn + nr_pages) - 1;
-
-	if (max_addr >> MAX_PHYSMEM_BITS) {
-		const u64 max_allowed = (1ull << (MAX_PHYSMEM_BITS + 1)) - 1;
-		WARN(1,
-		     "Hotplugged memory exceeds maximum addressable address, range=%#llx-%#llx, maximum=%#llx\n",
-		     (u64)PFN_PHYS(pfn), max_addr, max_allowed);
-		return -E2BIG;
-	}
-
-	return 0;
-}
-
 /*
  * Reasonably generic function for adding memory.  It is
  * expected that archs that support memory hotplug will
@@ -317,10 +304,7 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
 	if (WARN_ON_ONCE(!params->pgprot.pgprot))
 		return -EINVAL;
 
-	err = check_hotplug_memory_addressable(pfn, nr_pages);
-	if (err)
-		return err;
-
+	VM_BUG_ON(!memhp_range_allowed(PFN_PHYS(pfn), nr_pages * PAGE_SIZE, 1));
 	if (altmap) {
 		/*
 		 * Validate altmap is within bounds of the total request
@@ -1181,6 +1165,60 @@ int add_memory_driver_managed(int nid, u64 start, u64 size,
 }
 EXPORT_SYMBOL_GPL(add_memory_driver_managed);
 
+/*
+ * Platforms should define arch_get_mappable_range() that provides
+ * maximum possible addressable physical memory range for which the
+ * linear mapping could be created. The platform returned address
+ * range must adhere to these following semantics.
+ *
+ * - range.start <= range.end
+ * - Range includes both end points [range.start..range.end]
+ *
+ * There is also a fallback definition provided here, allowing the
+ * entire possible physical address range in case any platform does
+ * not define arch_get_mappable_range().
+ */
+struct range __weak arch_get_mappable_range(void)
+{
+	struct range memhp_range = {
+		.start = 0UL,
+		.end = -1ULL,
+	};
+	return memhp_range;
+}
+
+static inline struct range memhp_get_pluggable_range(bool need_mapping)
+{
+	const u64 max_phys = (1ULL << (MAX_PHYSMEM_BITS + 1)) - 1;
+	struct range memhp_range;
+
+	if (need_mapping) {
+		memhp_range = arch_get_mappable_range();
+		if (memhp_range.start > max_phys) {
+			memhp_range.start = 0;
+			memhp_range.end = 0;
+		}
+		memhp_range.end = min_t(u64, memhp_range.end, max_phys);
+	} else {
+		memhp_range.start = 0;
+		memhp_range.end = max_phys;
+	}
+	return memhp_range;
+}
+
+bool memhp_range_allowed(u64 start, u64 size, bool need_mapping)
+{
+	struct range memhp_range = memhp_get_pluggable_range(need_mapping);
+	u64 end = start + size;
+
+	if (start < end && start >= memhp_range.start && (end - 1) <= memhp_range.end)
+		return true;
+
+	pr_warn("Hotplug memory [%#llx-%#llx] exceeds maximum addressable range [%#llx-%#llx]\n",
+		start, end, memhp_range.start, memhp_range.end);
+	return false;
+}
+
 #ifdef CONFIG_MEMORY_HOTREMOVE
 /*
  * Confirm all pages in a range [start, end) belong to the same zone (skipping
diff --git a/mm/memremap.c b/mm/memremap.c
index 16b2fb482da1..26c1825756cc 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -185,6 +185,7 @@ static void dev_pagemap_percpu_release(struct percpu_ref *ref)
 static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 		int range_id, int nid)
 {
+	const bool is_private = pgmap->type == MEMORY_DEVICE_PRIVATE;
 	struct range *range = &pgmap->ranges[range_id];
 	struct dev_pagemap *conflict_pgmap;
 	int error, is_ram;
@@ -230,6 +231,9 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 	if (error)
 		goto err_pfn_remap;
 
+	if (!memhp_range_allowed(range->start, range_len(range), !is_private))
+		goto err_pfn_remap;
+
 	mem_hotplug_begin();
 
 	/*
@@ -243,7 +247,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 	 * the CPU, we do want the linear mapping and thus use
 	 * arch_add_memory().
 	 */
-	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+	if (is_private) {
 		error = add_pages(nid, PHYS_PFN(range->start),
 				PHYS_PFN(range_len(range)), params);
 	} else {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 2/3] arm64/mm: Define arch_get_mappable_range()
  2020-12-08  4:16 ` Anshuman Khandual
@ 2020-12-08  4:16   ` Anshuman Khandual
  -1 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2020-12-08  4:16 UTC (permalink / raw)
  To: linux-mm, akpm, david, hca, catalin.marinas
  Cc: linux-arm-kernel, linux-s390, linux-kernel, Anshuman Khandual,
	Vasily Gorbik, Will Deacon, Ard Biesheuvel, Mark Rutland

This overrides arch_get_mappable_range() on the arm64 platform, to be used
with the recently added generic framework. It drops inside_linear_region()
and the subsequent check in arch_add_memory(), which are no longer required.
It also adds a VM_BUG_ON() check to ensure that memhp_range_allowed() has
already been called.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/mm/mmu.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ca692a815731..54705dcacb4e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1444,16 +1444,19 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
 	free_empty_tables(start, end, PAGE_OFFSET, PAGE_END);
 }
 
-static bool inside_linear_region(u64 start, u64 size)
+struct range arch_get_mappable_range(void)
 {
+	struct range memhp_range;
+
 	/*
 	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
 	 * accommodating both its ends but excluding PAGE_END. Max physical
 	 * range which can be mapped inside this linear mapping range, must
 	 * also be derived from its end points.
 	 */
-	return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
-	       (start + size - 1) <= __pa(PAGE_END - 1);
+	memhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));
+	memhp_range.end =  __pa(PAGE_END - 1);
+	return memhp_range;
 }
 
 int arch_add_memory(int nid, u64 start, u64 size,
@@ -1461,11 +1464,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 {
 	int ret, flags = 0;
 
-	if (!inside_linear_region(start, size)) {
-		pr_err("[%llx %llx] is outside linear mapping region\n", start, start + size);
-		return -EINVAL;
-	}
-
+	VM_BUG_ON(!memhp_range_allowed(start, size, 1));
 	if (rodata_full || debug_pagealloc_enabled())
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-08  4:16 ` Anshuman Khandual
@ 2020-12-08  4:16   ` Anshuman Khandual
  -1 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2020-12-08  4:16 UTC (permalink / raw)
  To: linux-mm, akpm, david, hca, catalin.marinas
  Cc: linux-arm-kernel, linux-s390, linux-kernel, Anshuman Khandual,
	Vasily Gorbik, Will Deacon, Ard Biesheuvel, Mark Rutland

This overrides arch_get_mappable_range() on the s390 platform, to be used
with the recently added generic framework. It drops a redundant similar
check in vmem_add_mapping() while adding a new address range check to
__segment_load() to preserve the existing behaviour. It also adds a
VM_BUG_ON() check to ensure that memhp_range_allowed() has already been
called on the hotplug path.

Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: linux-s390@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/s390/mm/extmem.c |  5 +++++
 arch/s390/mm/init.c   | 10 ++++++++++
 arch/s390/mm/vmem.c   |  4 ----
 3 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
index 5060956b8e7d..cc055a78f7b6 100644
--- a/arch/s390/mm/extmem.c
+++ b/arch/s390/mm/extmem.c
@@ -337,6 +337,11 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
 		goto out_free_resource;
 	}
 
+	if (seg->end + 1 > VMEM_MAX_PHYS || seg->end + 1 < seg->start_addr) {
+		rc = -ERANGE;
+		goto out_resource;
+	}
+
 	rc = vmem_add_mapping(seg->start_addr, seg->end - seg->start_addr + 1);
 	if (rc)
 		goto out_resource;
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 77767850d0d0..64937baabf93 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -278,6 +278,15 @@ device_initcall(s390_cma_mem_init);
 
 #endif /* CONFIG_CMA */
 
+struct range arch_get_mappable_range(void)
+{
+	struct range memhp_range;
+
+	memhp_range.start = 0;
+	memhp_range.end =  VMEM_MAX_PHYS;
+	return memhp_range;
+}
+
 int arch_add_memory(int nid, u64 start, u64 size,
 		    struct mhp_params *params)
 {
@@ -291,6 +300,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
 		return -EINVAL;
 
+	VM_BUG_ON(!memhp_range_allowed(start, size, 1));
 	rc = vmem_add_mapping(start, size);
 	if (rc)
 		return rc;
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index b239f2ba93b0..749eab43aa93 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -536,10 +536,6 @@ int vmem_add_mapping(unsigned long start, unsigned long size)
 {
 	int ret;
 
-	if (start + size > VMEM_MAX_PHYS ||
-	    start + size < start)
-		return -ERANGE;
-
 	mutex_lock(&vmem_mutex);
 	ret = vmem_add_range(start, size);
 	if (ret)
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-08  4:16   ` Anshuman Khandual
@ 2020-12-08 15:27     ` Heiko Carstens
  -1 siblings, 0 replies; 32+ messages in thread
From: Heiko Carstens @ 2020-12-08 15:27 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: linux-mm, akpm, david, catalin.marinas, linux-arm-kernel,
	linux-s390, linux-kernel, Vasily Gorbik, Will Deacon,
	Ard Biesheuvel, Mark Rutland

On Tue, Dec 08, 2020 at 09:46:18AM +0530, Anshuman Khandual wrote:
> This overrides arch_get_mappabble_range() on s390 platform which will be
> used with recently added generic framework. It drops a redundant similar
> check in vmem_add_mapping() while compensating __segment_load() with a new
> address range check to preserve the existing functionality. It also adds a
> VM_BUG_ON() check that would ensure that memhp_range_allowed() has already
> been called on the hotplug path.
> 
> Cc: Heiko Carstens <hca@linux.ibm.com>
> Cc: Vasily Gorbik <gor@linux.ibm.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: linux-s390@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>  arch/s390/mm/extmem.c |  5 +++++
>  arch/s390/mm/init.c   | 10 ++++++++++
>  arch/s390/mm/vmem.c   |  4 ----
>  3 files changed, 15 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
> index 5060956b8e7d..cc055a78f7b6 100644
> --- a/arch/s390/mm/extmem.c
> +++ b/arch/s390/mm/extmem.c
> @@ -337,6 +337,11 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
>  		goto out_free_resource;
>  	}
>  
> +	if (seg->end + 1 > VMEM_MAX_PHYS || seg->end + 1 < seg->start_addr) {
> +		rc = -ERANGE;
> +		goto out_resource;
> +	}
> +
>  	rc = vmem_add_mapping(seg->start_addr, seg->end - seg->start_addr + 1);
>  	if (rc)
>  		goto out_resource;
> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
> index 77767850d0d0..64937baabf93 100644
> --- a/arch/s390/mm/init.c
> +++ b/arch/s390/mm/init.c
> @@ -278,6 +278,15 @@ device_initcall(s390_cma_mem_init);
>  
>  #endif /* CONFIG_CMA */
>  
> +struct range arch_get_mappable_range(void)
> +{
> +	struct range memhp_range;
> +
> +	memhp_range.start = 0;
> +	memhp_range.end =  VMEM_MAX_PHYS;
> +	return memhp_range;
> +}
> +
>  int arch_add_memory(int nid, u64 start, u64 size,
>  		    struct mhp_params *params)
>  {
> @@ -291,6 +300,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>  	if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>  		return -EINVAL;
>  
> +	VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>  	rc = vmem_add_mapping(start, size);
>  	if (rc)
>  		return rc;
> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
> index b239f2ba93b0..749eab43aa93 100644
> --- a/arch/s390/mm/vmem.c
> +++ b/arch/s390/mm/vmem.c
> @@ -536,10 +536,6 @@ int vmem_add_mapping(unsigned long start, unsigned long size)
>  {
>  	int ret;
>  
> -	if (start + size > VMEM_MAX_PHYS ||
> -	    start + size < start)
> -		return -ERANGE;
> -

Is there a reason why you added the memhp_range_allowed() check to
arch_add_memory() instead of vmem_add_mapping()? If you did that, then
the extra code in __segment_load() wouldn't be required, even though the
error message from memhp_range_allowed() might be highly confusing.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-08 15:27     ` Heiko Carstens
@ 2020-12-09  2:37       ` Anshuman Khandual
  -1 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2020-12-09  2:37 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: linux-mm, akpm, david, catalin.marinas, linux-arm-kernel,
	linux-s390, linux-kernel, Vasily Gorbik, Will Deacon,
	Ard Biesheuvel, Mark Rutland



On 12/8/20 8:57 PM, Heiko Carstens wrote:
> On Tue, Dec 08, 2020 at 09:46:18AM +0530, Anshuman Khandual wrote:
>> This overrides arch_get_mappabble_range() on s390 platform which will be
>> used with recently added generic framework. It drops a redundant similar
>> check in vmem_add_mapping() while compensating __segment_load() with a new
>> address range check to preserve the existing functionality. It also adds a
>> VM_BUG_ON() check that would ensure that memhp_range_allowed() has already
>> been called on the hotplug path.
>>
>> Cc: Heiko Carstens <hca@linux.ibm.com>
>> Cc: Vasily Gorbik <gor@linux.ibm.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: linux-s390@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>>  arch/s390/mm/extmem.c |  5 +++++
>>  arch/s390/mm/init.c   | 10 ++++++++++
>>  arch/s390/mm/vmem.c   |  4 ----
>>  3 files changed, 15 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
>> index 5060956b8e7d..cc055a78f7b6 100644
>> --- a/arch/s390/mm/extmem.c
>> +++ b/arch/s390/mm/extmem.c
>> @@ -337,6 +337,11 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
>>  		goto out_free_resource;
>>  	}
>>  
>> +	if (seg->end + 1 > VMEM_MAX_PHYS || seg->end + 1 < seg->start_addr) {
>> +		rc = -ERANGE;
>> +		goto out_resource;
>> +	}
>> +
>>  	rc = vmem_add_mapping(seg->start_addr, seg->end - seg->start_addr + 1);
>>  	if (rc)
>>  		goto out_resource;
>> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>> index 77767850d0d0..64937baabf93 100644
>> --- a/arch/s390/mm/init.c
>> +++ b/arch/s390/mm/init.c
>> @@ -278,6 +278,15 @@ device_initcall(s390_cma_mem_init);
>>  
>>  #endif /* CONFIG_CMA */
>>  
>> +struct range arch_get_mappable_range(void)
>> +{
>> +	struct range memhp_range;
>> +
>> +	memhp_range.start = 0;
>> +	memhp_range.end =  VMEM_MAX_PHYS;
>> +	return memhp_range;
>> +}
>> +
>>  int arch_add_memory(int nid, u64 start, u64 size,
>>  		    struct mhp_params *params)
>>  {
>> @@ -291,6 +300,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>  	if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>>  		return -EINVAL;
>>  
>> +	VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>>  	rc = vmem_add_mapping(start, size);
>>  	if (rc)
>>  		return rc;
>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>> index b239f2ba93b0..749eab43aa93 100644
>> --- a/arch/s390/mm/vmem.c
>> +++ b/arch/s390/mm/vmem.c
>> @@ -536,10 +536,6 @@ int vmem_add_mapping(unsigned long start, unsigned long size)
>>  {
>>  	int ret;
>>  
>> -	if (start + size > VMEM_MAX_PHYS ||
>> -	    start + size < start)
>> -		return -ERANGE;
>> -
> 
> Is there a reason why you added the memhp_range_allowed() check call
> to arch_add_memory() instead of vmem_add_mapping()? If you would do


As I had mentioned previously, memhp_range_allowed() is only available with
CONFIG_MEMORY_HOTPLUG, whereas vmem_add_mapping() is always available. Hence
calling memhp_range_allowed() for the range check in vmem_add_mapping()
would cause a build failure when memory hotplug is not enabled.
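
To be specific, with this series the declaration is only visible when
CONFIG_MEMORY_HOTPLUG is enabled, roughly along these lines (a sketch of
include/linux/memory_hotplug.h, not the literal layout):

	#ifdef CONFIG_MEMORY_HOTPLUG
	...
	bool memhp_range_allowed(u64 start, u64 size, bool need_mapping);
	struct range arch_get_mappable_range(void);
	...
	#endif /* CONFIG_MEMORY_HOTPLUG */

so an unconditional call from vmem_add_mapping() would not build when
memory hotplug is disabled.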

> that, then the extra code in __segment_load() wouldn't be
> required.
> Even though the error message from memhp_range_allowed() might be
> highly confusing.

Alternatively, leaving __segment_load() and vmem_add_mapping() unchanged
would create three range checks on all the hotplug paths, i.e. two
memhp_range_allowed() checks plus the existing VMEM_MAX_PHYS check in
vmem_add_mapping(), which is not optimal.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-09  2:37       ` Anshuman Khandual
@ 2020-12-09 14:57         ` Heiko Carstens
  -1 siblings, 0 replies; 32+ messages in thread
From: Heiko Carstens @ 2020-12-09 14:57 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: linux-mm, akpm, david, catalin.marinas, linux-arm-kernel,
	linux-s390, linux-kernel, Vasily Gorbik, Will Deacon,
	Ard Biesheuvel, Mark Rutland

On Wed, Dec 09, 2020 at 08:07:04AM +0530, Anshuman Khandual wrote:
> >> +	if (seg->end + 1 > VMEM_MAX_PHYS || seg->end + 1 < seg->start_addr) {
> >> +		rc = -ERANGE;
> >> +		goto out_resource;
> >> +	}
> >> +
...
> >> +struct range arch_get_mappable_range(void)
> >> +{
> >> +	struct range memhp_range;
> >> +
> >> +	memhp_range.start = 0;
> >> +	memhp_range.end =  VMEM_MAX_PHYS;
> >> +	return memhp_range;
> >> +}
> >> +
> >>  int arch_add_memory(int nid, u64 start, u64 size,
> >>  		    struct mhp_params *params)
> >>  {
> >> @@ -291,6 +300,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
> >>  	if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
> >>  		return -EINVAL;
> >>  
> >> +	VM_BUG_ON(!memhp_range_allowed(start, size, 1));
> >>  	rc = vmem_add_mapping(start, size);
> >>  	if (rc)
> > Is there a reason why you added the memhp_range_allowed() check call
> > to arch_add_memory() instead of vmem_add_mapping()? If you would do
> 
> As I had mentioned previously, memhp_range_allowed() is available with
> CONFIG_MEMORY_HOTPLUG but vmem_add_mapping() is always available. Hence
> there will be a build failure in vmem_add_mapping() for the range check
> memhp_range_allowed() without memory hotplug enabled.
> 
> > that, then the extra code in __segment_load() wouldn't be
> > required.
> > Even though the error message from memhp_range_allowed() might be
> > highly confusing.
>
> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
> will create three range checks i.e two memhp_range_allowed() and the
> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
> paths, which is not optimal.

Ah, sorry. I didn't follow this discussion too closely. I just thought
my point of view would be clear: let's not have two different ways to
check for the same thing which must be kept in sync.
Therefore I was wondering why this next version is still doing
that. Please find a way to solve this.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-09 14:57         ` Heiko Carstens
@ 2020-12-10  4:18           ` Anshuman Khandual
  -1 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2020-12-10  4:18 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: linux-mm, akpm, david, catalin.marinas, linux-arm-kernel,
	linux-s390, linux-kernel, Vasily Gorbik, Will Deacon,
	Ard Biesheuvel, Mark Rutland



On 12/9/20 8:27 PM, Heiko Carstens wrote:
> On Wed, Dec 09, 2020 at 08:07:04AM +0530, Anshuman Khandual wrote:
>>>> +	if (seg->end + 1 > VMEM_MAX_PHYS || seg->end + 1 < seg->start_addr) {
>>>> +		rc = -ERANGE;
>>>> +		goto out_resource;
>>>> +	}
>>>> +
> ...
>>>> +struct range arch_get_mappable_range(void)
>>>> +{
>>>> +	struct range memhp_range;
>>>> +
>>>> +	memhp_range.start = 0;
>>>> +	memhp_range.end =  VMEM_MAX_PHYS;
>>>> +	return memhp_range;
>>>> +}
>>>> +
>>>>  int arch_add_memory(int nid, u64 start, u64 size,
>>>>  		    struct mhp_params *params)
>>>>  {
>>>> @@ -291,6 +300,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>>>  	if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>>>>  		return -EINVAL;
>>>>  
>>>> +	VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>>>>  	rc = vmem_add_mapping(start, size);
>>>>  	if (rc)
>>> Is there a reason why you added the memhp_range_allowed() check call
>>> to arch_add_memory() instead of vmem_add_mapping()? If you would do
>>
>> As I had mentioned previously, memhp_range_allowed() is available with
>> CONFIG_MEMORY_HOTPLUG but vmem_add_mapping() is always available. Hence
>> there will be a build failure in vmem_add_mapping() for the range check
>> memhp_range_allowed() without memory hotplug enabled.
>>
>>> that, then the extra code in __segment_load() wouldn't be
>>> required.
>>> Even though the error message from memhp_range_allowed() might be
>>> highly confusing.
>>
>> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
>> will create three range checks i.e two memhp_range_allowed() and the
>> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
>> paths, which is not optimal.
> 
> Ah, sorry. I didn't follow this discussion too closely. I just thought
> my point of view would be clear: let's not have two different ways to
> check for the same thing which must be kept in sync.
> Therefore I was wondering why this next version is still doing
> that. Please find a way to solve this.

The following change applies on top of the current series and should work
both with and without memory hotplug enabled. There will be just a single
place, i.e. vmem_get_max_addr(), to update in case the maximum address
changes from VMEM_MAX_PHYS to something else later.

diff --git a/arch/s390/include/asm/sections.h b/arch/s390/include/asm/sections.h
index 0c21514..2da496f 100644
--- a/arch/s390/include/asm/sections.h
+++ b/arch/s390/include/asm/sections.h
@@ -16,6 +16,7 @@ static inline int arch_is_kernel_initmem_freed(unsigned long addr)
 	       addr < (unsigned long)__init_end;
 }
 
+unsigned long vmem_get_max_addr(void);
 /*
  * .boot.data section contains variables "shared" between the decompressor and
  * the decompressed kernel. The decompressor will store values in them, and
diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
index cc055a7..1bddd6f 100644
--- a/arch/s390/mm/extmem.c
+++ b/arch/s390/mm/extmem.c
@@ -28,6 +28,7 @@
 #include <asm/extmem.h>
 #include <asm/cpcmd.h>
 #include <asm/setup.h>
+#include <asm/sections.h>
 
 #define DCSS_PURGESEG   0x08
 #define DCSS_LOADSHRX	0x20
@@ -287,6 +288,13 @@ segment_overlaps_others (struct dcss_segment *seg)
 	return 0;
 }
 
+static bool segment_outside_range(struct dcss_segment *seg)
+{
+	unsigned long max_addr = vmem_get_max_addr();
+
+	return (seg->end + 1 > max_addr || seg->end + 1 < seg->start_addr);
+}
+
 /*
  * real segment loading function, called from segment_load
  */
@@ -337,7 +345,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
 		goto out_free_resource;
 	}
 
-	if (seg->end + 1 > VMEM_MAX_PHYS || seg->end + 1 < seg->start_addr) {
+	if (segment_outside_range(seg)) {
 		rc = -ERANGE;
 		goto out_resource;
 	}
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 64937ba..5c6ee9f 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -283,7 +283,7 @@ struct range arch_get_mappable_range(void)
 	struct range memhp_range;
 
 	memhp_range.start = 0;
-	memhp_range.end =  VMEM_MAX_PHYS;
+	memhp_range.end =  vmem_get_max_addr();
 	return memhp_range;
 }
 
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index 749eab4..6044e85 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -532,6 +532,11 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
 	mutex_unlock(&vmem_mutex);
 }
 
+unsigned long vmem_get_max_addr(void)
+{
+        return VMEM_MAX_PHYS;
+}
+
 int vmem_add_mapping(unsigned long start, unsigned long size)
 {
 	int ret;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-10  4:18           ` Anshuman Khandual
@ 2020-12-10  6:58             ` Heiko Carstens
  0 siblings, 0 replies; 32+ messages in thread
From: Heiko Carstens @ 2020-12-10  6:58 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: linux-mm, akpm, david, catalin.marinas, linux-arm-kernel,
	linux-s390, linux-kernel, Vasily Gorbik, Will Deacon,
	Ard Biesheuvel, Mark Rutland

On Thu, Dec 10, 2020 at 09:48:11AM +0530, Anshuman Khandual wrote:
> >> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
> >> will create three range checks i.e two memhp_range_allowed() and the
> >> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
> >> paths, which is not optimal.
> > 
> > Ah, sorry. I didn't follow this discussion too closely. I just thought
> > my point of view would be clear: let's not have two different ways to
> > check for the same thing which must be kept in sync.
> > Therefore I was wondering why this next version is still doing
> > that. Please find a way to solve this.
> 
> The following change is after the current series and should work with
> and without memory hotplug enabled. There will be just a single place
> i.e vmem_get_max_addr() to update in case the maximum address changes
> from VMEM_MAX_PHYS to something else later.

Still not. That's way too much code churn for what you want to achieve.
If the s390 specific patch would look like below you can add

Acked-by: Heiko Carstens <hca@linux.ibm.com>

But please make sure that the arch_get_mappable_range() prototype in
linux/memory_hotplug.h is always visible and does not depend on
CONFIG_MEMORY_HOTPLUG. I'd like to avoid seeing sparse warnings
because of this.

Thanks.

diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 77767850d0d0..e0e78234ae57 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -291,6 +291,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
 		return -EINVAL;
 
+	VM_BUG_ON(!memhp_range_allowed(start, size, 1));
 	rc = vmem_add_mapping(start, size);
 	if (rc)
 		return rc;
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index b239f2ba93b0..ccd55e2f97f9 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -4,6 +4,7 @@
  *    Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
  */
 
+#include <linux/memory_hotplug.h>
 #include <linux/memblock.h>
 #include <linux/pfn.h>
 #include <linux/mm.h>
@@ -532,11 +533,23 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
 	mutex_unlock(&vmem_mutex);
 }
 
+struct range arch_get_mappable_range(void)
+{
+	struct range range;
+
+	range.start = 0;
+	range.end = VMEM_MAX_PHYS;
+	return range;
+}
+
 int vmem_add_mapping(unsigned long start, unsigned long size)
 {
+	struct range range;
 	int ret;
 
-	if (start + size > VMEM_MAX_PHYS ||
+	range = arch_get_mappable_range();
+	if (start < range.start ||
+	    start + size > range.end ||
 	    start + size < start)
 		return -ERANGE;
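
For reference, a minimal sketch (not part of the mail above) of the header
layout being asked for in include/linux/memory_hotplug.h; the
memhp_range_allowed() signature is only assumed from the VM_BUG_ON() call in
the hunk above, not taken from the posted patch:

#include <linux/range.h>

/*
 * Keep the prototype visible unconditionally so that an arch providing
 * arch_get_mappable_range() without CONFIG_MEMORY_HOTPLUG does not trigger
 * sparse/missing-prototype warnings.
 */
struct range arch_get_mappable_range(void);

#ifdef CONFIG_MEMORY_HOTPLUG
/* Hotplug-only helpers stay behind the config guard. */
bool memhp_range_allowed(u64 start, u64 size, bool need_mapping);
#endif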
 

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-10  6:58             ` Heiko Carstens
@ 2020-12-10  7:04               ` David Hildenbrand
  0 siblings, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2020-12-10  7:04 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: Anshuman Khandual, linux-mm, akpm, david, catalin.marinas,
	linux-arm-kernel, linux-s390, linux-kernel, Vasily Gorbik,
	Will Deacon, Ard Biesheuvel, Mark Rutland


> Am 10.12.2020 um 07:58 schrieb Heiko Carstens <hca@linux.ibm.com>:
> 
> On Thu, Dec 10, 2020 at 09:48:11AM +0530, Anshuman Khandual wrote:
>>>> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
>>>> will create three range checks i.e two memhp_range_allowed() and the
>>>> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
>>>> paths, which is not optimal.
>>> 
>>> Ah, sorry. I didn't follow this discussion too closely. I just thought
>>> my point of view would be clear: let's not have two different ways to
>>> check for the same thing which must be kept in sync.
>>> Therefore I was wondering why this next version is still doing
>>> that. Please find a way to solve this.
>> 
>> The following change is after the current series and should work with
>> and without memory hotplug enabled. There will be just a single place
>> i.e vmem_get_max_addr() to update in case the maximum address changes
>> from VMEM_MAX_PHYS to something else later.
> 
> Still not. That's way too much code churn for what you want to achieve.
> If the s390 specific patch would look like below you can add
> 
> Acked-by: Heiko Carstens <hca@linux.ibm.com>
> 
> But please make sure that the arch_get_mappable_range() prototype in
> linux/memory_hotplug.h is always visible and does not depend on
> CONFIG_MEMORY_HOTPLUG. I'd like to avoid seeing sparse warnings
> because of this.
> 
> Thanks.
> 
> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
> index 77767850d0d0..e0e78234ae57 100644
> --- a/arch/s390/mm/init.c
> +++ b/arch/s390/mm/init.c
> @@ -291,6 +291,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>    if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>        return -EINVAL;
> 
> +    VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>    rc = vmem_add_mapping(start, size);
>    if (rc)
>        return rc;
> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
> index b239f2ba93b0..ccd55e2f97f9 100644
> --- a/arch/s390/mm/vmem.c
> +++ b/arch/s390/mm/vmem.c
> @@ -4,6 +4,7 @@
>  *    Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
>  */
> 
> +#include <linux/memory_hotplug.h>
> #include <linux/memblock.h>
> #include <linux/pfn.h>
> #include <linux/mm.h>
> @@ -532,11 +533,23 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
>    mutex_unlock(&vmem_mutex);
> }
> 
> +struct range arch_get_mappable_range(void)
> +{
> +    struct range range;
> +
> +    range.start = 0;
> +    range.end = VMEM_MAX_PHYS;
> +    return range;
> +}
> +
> int vmem_add_mapping(unsigned long start, unsigned long size)
> {
> +    struct range range;
>    int ret;
> 
> -    if (start + size > VMEM_MAX_PHYS ||
> +    range = arch_get_mappable_range();
> +    if (start < range.start ||
> +        start + size > range.end ||
>        start + size < start)
>        return -ERANGE;
> 
> 

Right, what I had in mind as reply to v1. Not sure if we really need new checks in common code. Having a new memhp_get_pluggable_range() would be sufficient for my use case (virtio-mem).

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-10  7:04               ` David Hildenbrand
@ 2020-12-10  7:40                 ` Anshuman Khandual
  -1 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2020-12-10  7:40 UTC (permalink / raw)
  To: David Hildenbrand, Heiko Carstens
  Cc: linux-mm, akpm, catalin.marinas, linux-arm-kernel, linux-s390,
	linux-kernel, Vasily Gorbik, Will Deacon, Ard Biesheuvel,
	Mark Rutland



On 12/10/20 12:34 PM, David Hildenbrand wrote:
> 
>> Am 10.12.2020 um 07:58 schrieb Heiko Carstens <hca@linux.ibm.com>:
>>
>> On Thu, Dec 10, 2020 at 09:48:11AM +0530, Anshuman Khandual wrote:
>>>>> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
>>>>> will create three range checks i.e two memhp_range_allowed() and the
>>>>> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
>>>>> paths, which is not optimal.
>>>>
>>>> Ah, sorry. I didn't follow this discussion too closely. I just thought
>>>> my point of view would be clear: let's not have two different ways to
>>>> check for the same thing which must be kept in sync.
>>>> Therefore I was wondering why this next version is still doing
>>>> that. Please find a way to solve this.
>>>
>>> The following change is after the current series and should work with
>>> and without memory hotplug enabled. There will be just a single place
>>> i.e vmem_get_max_addr() to update in case the maximum address changes
>>> from VMEM_MAX_PHYS to something else later.
>>
>> Still not. That's way too much code churn for what you want to achieve.
>> If the s390 specific patch would look like below you can add
>>
>> Acked-by: Heiko Carstens <hca@linux.ibm.com>
>>
>> But please make sure that the arch_get_mappable_range() prototype in
>> linux/memory_hotplug.h is always visible and does not depend on
>> CONFIG_MEMORY_HOTPLUG. I'd like to avoid seeing sparse warnings
>> because of this.
>>
>> Thanks.
>>
>> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>> index 77767850d0d0..e0e78234ae57 100644
>> --- a/arch/s390/mm/init.c
>> +++ b/arch/s390/mm/init.c
>> @@ -291,6 +291,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>    if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>>        return -EINVAL;
>>
>> +    VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>>    rc = vmem_add_mapping(start, size);
>>    if (rc)
>>        return rc;
>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>> index b239f2ba93b0..ccd55e2f97f9 100644
>> --- a/arch/s390/mm/vmem.c
>> +++ b/arch/s390/mm/vmem.c
>> @@ -4,6 +4,7 @@
>>  *    Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
>>  */
>>
>> +#include <linux/memory_hotplug.h>
>> #include <linux/memblock.h>
>> #include <linux/pfn.h>
>> #include <linux/mm.h>
>> @@ -532,11 +533,23 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
>>    mutex_unlock(&vmem_mutex);
>> }
>>
>> +struct range arch_get_mappable_range(void)
>> +{
>> +    struct range range;
>> +
>> +    range.start = 0;
>> +    range.end = VMEM_MAX_PHYS;
>> +    return range;
>> +}
>> +
>> int vmem_add_mapping(unsigned long start, unsigned long size)
>> {
>> +    struct range range;
>>    int ret;
>>
>> -    if (start + size > VMEM_MAX_PHYS ||
>> +    range = arch_get_mappable_range();
>> +    if (start < range.start ||
>> +        start + size > range.end ||
>>        start + size < start)
>>        return -ERANGE;
>>
>>
> 
> Right, what I had in mind as reply to v1. Not sure if we really need new checks in common code. Having a new memhp_get_pluggable_range() would be sufficient for my use case (virtio-mem).

I didn't quite understand "Not sure if we really need new checks in common code".
Could you please be more specific? New checks as in pagemap_range()? Elsewhere it
is either replacing the erstwhile check_hotplug_memory_addressable() or just
moving existing checks from the platform's arch_add_memory() to the beginning of
the various hotplug paths.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-10  7:40                 ` Anshuman Khandual
@ 2020-12-10  8:02                   ` David Hildenbrand
  0 siblings, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2020-12-10  8:02 UTC (permalink / raw)
  To: Anshuman Khandual, Heiko Carstens
  Cc: linux-mm, akpm, catalin.marinas, linux-arm-kernel, linux-s390,
	linux-kernel, Vasily Gorbik, Will Deacon, Ard Biesheuvel,
	Mark Rutland

On 10.12.20 08:40, Anshuman Khandual wrote:
> 
> 
> On 12/10/20 12:34 PM, David Hildenbrand wrote:
>>
>>> Am 10.12.2020 um 07:58 schrieb Heiko Carstens <hca@linux.ibm.com>:
>>>
>>> On Thu, Dec 10, 2020 at 09:48:11AM +0530, Anshuman Khandual wrote:
>>>>>> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
>>>>>> will create three range checks i.e two memhp_range_allowed() and the
>>>>>> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
>>>>>> paths, which is not optimal.
>>>>>
>>>>> Ah, sorry. I didn't follow this discussion too closely. I just thought
>>>>> my point of view would be clear: let's not have two different ways to
>>>>> check for the same thing which must be kept in sync.
>>>>> Therefore I was wondering why this next version is still doing
>>>>> that. Please find a way to solve this.
>>>>
>>>> The following change is after the current series and should work with
>>>> and without memory hotplug enabled. There will be just a single place
>>>> i.e vmem_get_max_addr() to update in case the maximum address changes
>>>> from VMEM_MAX_PHYS to something else later.
>>>
>>> Still not. That's way too much code churn for what you want to achieve.
>>> If the s390 specific patch would look like below you can add
>>>
>>> Acked-by: Heiko Carstens <hca@linux.ibm.com>
>>>
>>> But please make sure that the arch_get_mappable_range() prototype in
>>> linux/memory_hotplug.h is always visible and does not depend on
>>> CONFIG_MEMORY_HOTPLUG. I'd like to avoid seeing sparse warnings
>>> because of this.
>>>
>>> Thanks.
>>>
>>> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>>> index 77767850d0d0..e0e78234ae57 100644
>>> --- a/arch/s390/mm/init.c
>>> +++ b/arch/s390/mm/init.c
>>> @@ -291,6 +291,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>>    if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>>>        return -EINVAL;
>>>
>>> +    VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>>>    rc = vmem_add_mapping(start, size);
>>>    if (rc)
>>>        return rc;
>>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>>> index b239f2ba93b0..ccd55e2f97f9 100644
>>> --- a/arch/s390/mm/vmem.c
>>> +++ b/arch/s390/mm/vmem.c
>>> @@ -4,6 +4,7 @@
>>>  *    Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
>>>  */
>>>
>>> +#include <linux/memory_hotplug.h>
>>> #include <linux/memblock.h>
>>> #include <linux/pfn.h>
>>> #include <linux/mm.h>
>>> @@ -532,11 +533,23 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
>>>    mutex_unlock(&vmem_mutex);
>>> }
>>>
>>> +struct range arch_get_mappable_range(void)
>>> +{
>>> +    struct range range;
>>> +
>>> +    range.start = 0;
>>> +    range.end = VMEM_MAX_PHYS;
>>> +    return range;
>>> +}
>>> +
>>> int vmem_add_mapping(unsigned long start, unsigned long size)
>>> {
>>> +    struct range range;
>>>    int ret;
>>>
>>> -    if (start + size > VMEM_MAX_PHYS ||
>>> +    range = arch_get_mappable_range();
>>> +    if (start < range.start ||
>>> +        start + size > range.end ||
>>>        start + size < start)
>>>        return -ERANGE;
>>>
>>>
>>
>> Right, what I had in mind as reply to v1. Not sure if we really need new checks in common code. Having a new memhp_get_pluggable_range() would be sufficient for my use case (virtio-mem).
> Didn't quite understand "Not sure if we really need new checks in common code".
> Could you please be more specific. New checks as in pagemap_range() ? Because
> other places it is either replacing erstwhile check_hotplug_memory_addressable()
> or just moving existing checks from platform arch_add_memory() to the beginning
> of various hotplug paths.

The main concern I have with the current code is that it makes it impossible
for a driver to detect which ranges it could actually hotplug later.
You cannot warn about a strange setup before you actually run into
issues while trying to add memory. It is like returning -EINVAL from a
function without exposing which values are actually valid.

If we have memhp_get_pluggable_range(), we have such a mechanism and

1. Trying to add out-of-range memory will fail (albeit deep down in the
arch code, but at least it fails).

2. There is a way for drivers to find out which values are actually
valid before triggering 1.
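
As an illustration only (memhp_get_pluggable_range() is the proposed
interface; its exact signature and the meaning of its bool parameter as
"needs a linear mapping" are assumptions here), a driver-side pre-check
could look roughly like this:

#include <linux/memory_hotplug.h>
#include <linux/range.h>

static int sketch_add_region(int nid, u64 start, u64 size)
{
	struct range pluggable = memhp_get_pluggable_range(true);

	/* Reject the region up front instead of failing deep in arch code. */
	if (start < pluggable.start || start + size - 1 > pluggable.end)
		return -E2BIG;

	return add_memory(nid, start, size, MHP_NONE);
}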

For my use case that's good enough. Do you have others in mind that
require new checks in common code (meaning inside add_memory() and friends)?

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-10  8:02                   ` David Hildenbrand
@ 2020-12-10  8:58                     ` Anshuman Khandual
  0 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2020-12-10  8:58 UTC (permalink / raw)
  To: David Hildenbrand, Heiko Carstens
  Cc: linux-mm, akpm, catalin.marinas, linux-arm-kernel, linux-s390,
	linux-kernel, Vasily Gorbik, Will Deacon, Ard Biesheuvel,
	Mark Rutland



On 12/10/20 1:32 PM, David Hildenbrand wrote:
> On 10.12.20 08:40, Anshuman Khandual wrote:
>>
>>
>> On 12/10/20 12:34 PM, David Hildenbrand wrote:
>>>
>>>> Am 10.12.2020 um 07:58 schrieb Heiko Carstens <hca@linux.ibm.com>:
>>>>
>>>> On Thu, Dec 10, 2020 at 09:48:11AM +0530, Anshuman Khandual wrote:
>>>>>>> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
>>>>>>> will create three range checks i.e two memhp_range_allowed() and the
>>>>>>> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
>>>>>>> paths, which is not optimal.
>>>>>>
>>>>>> Ah, sorry. I didn't follow this discussion too closely. I just thought
>>>>>> my point of view would be clear: let's not have two different ways to
>>>>>> check for the same thing which must be kept in sync.
>>>>>> Therefore I was wondering why this next version is still doing
>>>>>> that. Please find a way to solve this.
>>>>>
>>>>> The following change is after the current series and should work with
>>>>> and without memory hotplug enabled. There will be just a single place
>>>>> i.e vmem_get_max_addr() to update in case the maximum address changes
>>>>> from VMEM_MAX_PHYS to something else later.
>>>>
>>>> Still not. That's way too much code churn for what you want to achieve.
>>>> If the s390 specific patch would look like below you can add
>>>>
>>>> Acked-by: Heiko Carstens <hca@linux.ibm.com>
>>>>
>>>> But please make sure that the arch_get_mappable_range() prototype in
>>>> linux/memory_hotplug.h is always visible and does not depend on
>>>> CONFIG_MEMORY_HOTPLUG. I'd like to avoid seeing sparse warnings
>>>> because of this.
>>>>
>>>> Thanks.
>>>>
>>>> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>>>> index 77767850d0d0..e0e78234ae57 100644
>>>> --- a/arch/s390/mm/init.c
>>>> +++ b/arch/s390/mm/init.c
>>>> @@ -291,6 +291,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>>>    if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>>>>        return -EINVAL;
>>>>
>>>> +    VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>>>>    rc = vmem_add_mapping(start, size);
>>>>    if (rc)
>>>>        return rc;
>>>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>>>> index b239f2ba93b0..ccd55e2f97f9 100644
>>>> --- a/arch/s390/mm/vmem.c
>>>> +++ b/arch/s390/mm/vmem.c
>>>> @@ -4,6 +4,7 @@
>>>>  *    Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
>>>>  */
>>>>
>>>> +#include <linux/memory_hotplug.h>
>>>> #include <linux/memblock.h>
>>>> #include <linux/pfn.h>
>>>> #include <linux/mm.h>
>>>> @@ -532,11 +533,23 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
>>>>    mutex_unlock(&vmem_mutex);
>>>> }
>>>>
>>>> +struct range arch_get_mappable_range(void)
>>>> +{
>>>> +    struct range range;
>>>> +
>>>> +    range.start = 0;
>>>> +    range.end = VMEM_MAX_PHYS;
>>>> +    return range;
>>>> +}
>>>> +
>>>> int vmem_add_mapping(unsigned long start, unsigned long size)
>>>> {
>>>> +    struct range range;
>>>>    int ret;
>>>>
>>>> -    if (start + size > VMEM_MAX_PHYS ||
>>>> +    range = arch_get_mappable_range();
>>>> +    if (start < range.start ||
>>>> +        start + size > range.end ||
>>>>        start + size < start)
>>>>        return -ERANGE;
>>>>
>>>>
>>>
>>> Right, what I had in mind as reply to v1. Not sure if we really need new checks in common code. Having a new memhp_get_pluggable_range() would be sufficient for my use case (virtio-mem).
>> Didn't quite understand "Not sure if we really need new checks in common code".
>> Could you please be more specific. New checks as in pagemap_range() ? Because
>> other places it is either replacing erstwhile check_hotplug_memory_addressable()
>> or just moving existing checks from platform arch_add_memory() to the beginning
>> of various hotplug paths.
> 
> The main concern I have with current code is that it makes it impossible
> for some driver to detect which ranges it could actually later hotplug.
> You cannot warn about a strange setup before you actually run into the
> issues while trying to add memory. Like returning "-EINVAL" from a
> function but not exposing which values are actually valid.
> 
> If we have memhp_get_pluggable_range(), we have such a mechanism and
> 
> 1. Trying to add out-of-range memory will fail (although deep down in
> arch code, but at least it fails).
> 
> 2. There is a way for drivers to find out which values are actually
> valid before triggering 1.

Right, that is an important use case from a driver perspective, as it
helps validate the range being attempted for hotplug before failing.
But then memhp_range_allowed() also uses the same mechanism, i.e.
memhp_get_pluggable_range(), to unify

1. the generic check_hotplug_memory_addressable()
2. the platform checks in arch_add_memory()

This unified function can be called right at the beginning of memory
hotplug so that both (1) and (2) can be dropped altogether. This is
just a logical extension which improves the memory hotplug
implementation (in itself) by failing earlier and, while at it, also
unifies the generic and platform-specific range constraints. Both
use cases are orthogonal IMHO.

> 
> For my use case that's good enough. Do you have others in mind that
> require new checks in common code (meaning inside add_memory() and friends)?
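
A rough sketch of that unified entry check in mm/memory_hotplug.c, purely
illustrative; the exact names, signatures and the pr_warn() wording are
assumed from this discussion rather than taken from the posted patch:

static bool memhp_range_allowed(u64 start, u64 size, bool need_mapping)
{
	struct range range = memhp_get_pluggable_range(need_mapping);
	u64 end = start + size;

	/* The whole [start, end - 1] span must fall inside the allowed range. */
	if (start < end && start >= range.start && end - 1 <= range.end)
		return true;

	pr_warn("Hotplug memory [%#llx-%#llx] exceeds maximum addressable range [%#llx-%#llx]\n",
		start, end - 1, range.start, range.end);
	return false;
}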

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-10  8:58                     ` Anshuman Khandual
@ 2020-12-10  9:39                       ` David Hildenbrand
  0 siblings, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2020-12-10  9:39 UTC (permalink / raw)
  To: Anshuman Khandual, Heiko Carstens
  Cc: linux-mm, akpm, catalin.marinas, linux-arm-kernel, linux-s390,
	linux-kernel, Vasily Gorbik, Will Deacon, Ard Biesheuvel,
	Mark Rutland

On 10.12.20 09:58, Anshuman Khandual wrote:
> 
> 
> On 12/10/20 1:32 PM, David Hildenbrand wrote:
>> On 10.12.20 08:40, Anshuman Khandual wrote:
>>>
>>>
>>> On 12/10/20 12:34 PM, David Hildenbrand wrote:
>>>>
>>>>> Am 10.12.2020 um 07:58 schrieb Heiko Carstens <hca@linux.ibm.com>:
>>>>>
>>>>> On Thu, Dec 10, 2020 at 09:48:11AM +0530, Anshuman Khandual wrote:
>>>>>>>> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
>>>>>>>> will create three range checks i.e two memhp_range_allowed() and the
>>>>>>>> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
>>>>>>>> paths, which is not optimal.
>>>>>>>
>>>>>>> Ah, sorry. I didn't follow this discussion too closely. I just thought
>>>>>>> my point of view would be clear: let's not have two different ways to
>>>>>>> check for the same thing which must be kept in sync.
>>>>>>> Therefore I was wondering why this next version is still doing
>>>>>>> that. Please find a way to solve this.
>>>>>>
>>>>>> The following change is after the current series and should work with
>>>>>> and without memory hotplug enabled. There will be just a single place
>>>>>> i.e vmem_get_max_addr() to update in case the maximum address changes
>>>>>> from VMEM_MAX_PHYS to something else later.
>>>>>
>>>>> Still not. That's way too much code churn for what you want to achieve.
>>>>> If the s390 specific patch would look like below you can add
>>>>>
>>>>> Acked-by: Heiko Carstens <hca@linux.ibm.com>
>>>>>
>>>>> But please make sure that the arch_get_mappable_range() prototype in
>>>>> linux/memory_hotplug.h is always visible and does not depend on
>>>>> CONFIG_MEMORY_HOTPLUG. I'd like to avoid seeing sparse warnings
>>>>> because of this.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>>>>> index 77767850d0d0..e0e78234ae57 100644
>>>>> --- a/arch/s390/mm/init.c
>>>>> +++ b/arch/s390/mm/init.c
>>>>> @@ -291,6 +291,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>>>>    if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>>>>>        return -EINVAL;
>>>>>
>>>>> +    VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>>>>>    rc = vmem_add_mapping(start, size);
>>>>>    if (rc)
>>>>>        return rc;
>>>>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>>>>> index b239f2ba93b0..ccd55e2f97f9 100644
>>>>> --- a/arch/s390/mm/vmem.c
>>>>> +++ b/arch/s390/mm/vmem.c
>>>>> @@ -4,6 +4,7 @@
>>>>>  *    Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
>>>>>  */
>>>>>
>>>>> +#include <linux/memory_hotplug.h>
>>>>> #include <linux/memblock.h>
>>>>> #include <linux/pfn.h>
>>>>> #include <linux/mm.h>
>>>>> @@ -532,11 +533,23 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
>>>>>    mutex_unlock(&vmem_mutex);
>>>>> }
>>>>>
>>>>> +struct range arch_get_mappable_range(void)
>>>>> +{
>>>>> +    struct range range;
>>>>> +
>>>>> +    range.start = 0;
>>>>> +    range.end = VMEM_MAX_PHYS;
>>>>> +    return range;
>>>>> +}
>>>>> +
>>>>> int vmem_add_mapping(unsigned long start, unsigned long size)
>>>>> {
>>>>> +    struct range range;
>>>>>    int ret;
>>>>>
>>>>> -    if (start + size > VMEM_MAX_PHYS ||
>>>>> +    range = arch_get_mappable_range();
>>>>> +    if (start < range.start ||
>>>>> +        start + size > range.end ||
>>>>>        start + size < start)
>>>>>        return -ERANGE;
>>>>>
>>>>>
>>>>
>>>> Right, what I had in mind as reply to v1. Not sure if we really need new checks in common code. Having a new memhp_get_pluggable_range() would be sufficient for my use case (virtio-mem).
>>> Didn't quite understand "Not sure if we really need new checks in common code".
>>> Could you please be more specific. New checks as in pagemap_range() ? Because
>>> other places it is either replacing erstwhile check_hotplug_memory_addressable()
>>> or just moving existing checks from platform arch_add_memory() to the beginning
>>> of various hotplug paths.
>>
>> The main concern I have with current code is that it makes it impossible
>> for some driver to detect which ranges it could actually later hotplug.
>> You cannot warn about a strange setup before you actually run into the
>> issues while trying to add memory. Like returning "-EINVAL" from a
>> function but not exposing which values are actually valid.
>>
>> If we have memhp_get_pluggable_range(), we have such a mechanism and
>>
>> 1. Trying to add out-of-range memory will fail (although deep down in
>> arch code, but at least it fails).
>>
>> 2. There is a way for drivers to find out which values are actually
>> valid before triggering 1.
> 
> Right, that is an important use case from a driver perspective as it
> helps validate the range being attempted for hotplug, before failing.
> But then memhp_range_allowed() also uses the same mechanism i.e
> memhp_get_pluggable_range() to unify
> 
> 1. Generic check_hotplug_memory_addressable()
> 2. Platform checks in arch_add_memory()
> 
> This unified function can be called just at the beginning of memory
> hotplug so that both (1) and (2) can be dropped all together. This
> is just a logical extension which does improve the memory hotplug
> implementation (in itself) by failing earlier and while at it, also
> unifying generic and platform specific range constraints. Both the
> use cases are orthogonal IMHO.

As long as it simplifies the code, sure. But at least in the s390x case,
we cannot get rid of the internal checks.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-10  7:04               ` David Hildenbrand
@ 2020-12-17 11:45                 ` Anshuman Khandual
  -1 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2020-12-17 11:45 UTC (permalink / raw)
  To: David Hildenbrand, Heiko Carstens
  Cc: linux-mm, akpm, catalin.marinas, linux-arm-kernel, linux-s390,
	linux-kernel, Vasily Gorbik, Will Deacon, Ard Biesheuvel,
	Mark Rutland



On 12/10/20 12:34 PM, David Hildenbrand wrote:
> 
>> Am 10.12.2020 um 07:58 schrieb Heiko Carstens <hca@linux.ibm.com>:
>>
>> On Thu, Dec 10, 2020 at 09:48:11AM +0530, Anshuman Khandual wrote:
>>>>> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
>>>>> will create three range checks i.e two memhp_range_allowed() and the
>>>>> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
>>>>> paths, which is not optimal.
>>>>
>>>> Ah, sorry. I didn't follow this discussion too closely. I just thought
>>>> my point of view would be clear: let's not have two different ways to
>>>> check for the same thing which must be kept in sync.
>>>> Therefore I was wondering why this next version is still doing
>>>> that. Please find a way to solve this.
>>>
>>> The following change is after the current series and should work with
>>> and without memory hotplug enabled. There will be just a single place
>>> i.e vmem_get_max_addr() to update in case the maximum address changes
>>> from VMEM_MAX_PHYS to something else later.
>>
>> Still not. That's way too much code churn for what you want to achieve.
>> If the s390 specific patch would look like below you can add
>>
>> Acked-by: Heiko Carstens <hca@linux.ibm.com>
>>
>> But please make sure that the arch_get_mappable_range() prototype in
>> linux/memory_hotplug.h is always visible and does not depend on
>> CONFIG_MEMORY_HOTPLUG. I'd like to avoid seeing sparse warnings
>> because of this.
>>
>> Thanks.
>>
>> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>> index 77767850d0d0..e0e78234ae57 100644
>> --- a/arch/s390/mm/init.c
>> +++ b/arch/s390/mm/init.c
>> @@ -291,6 +291,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>    if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>>        return -EINVAL;
>>
>> +    VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>>    rc = vmem_add_mapping(start, size);
>>    if (rc)
>>        return rc;
>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>> index b239f2ba93b0..ccd55e2f97f9 100644
>> --- a/arch/s390/mm/vmem.c
>> +++ b/arch/s390/mm/vmem.c
>> @@ -4,6 +4,7 @@
>>  *    Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
>>  */
>>
>> +#include <linux/memory_hotplug.h>
>> #include <linux/memblock.h>
>> #include <linux/pfn.h>
>> #include <linux/mm.h>
>> @@ -532,11 +533,23 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
>>    mutex_unlock(&vmem_mutex);
>> }
>>
>> +struct range arch_get_mappable_range(void)
>> +{
>> +    struct range range;
>> +
>> +    range.start = 0;
>> +    range.end = VMEM_MAX_PHYS;
>> +    return range;
>> +}
>> +
>> int vmem_add_mapping(unsigned long start, unsigned long size)
>> {
>> +    struct range range;
>>    int ret;
>>
>> -    if (start + size > VMEM_MAX_PHYS ||
>> +    range = arch_get_mappable_range();
>> +    if (start < range.start ||
>> +        start + size > range.end ||
>>        start + size < start)
>>        return -ERANGE;
>>
>>
> 
> Right, what I had in mind as a reply to v1. Not sure if we really need new checks in common code. Having a new memhp_get_pluggable_range() would be sufficient for my use case (virtio-mem).

Hello David,

Quick question. Currently memhp_get_pluggable_range() is a mm/memory_hotplug.c
internal static inline function; only memhp_range_allowed() is available via
the header include/linux/memory_hotplug.h. For memhp_get_pluggable_range()
to be visible to drivers, it either needs to be declared in the header and
exported via EXPORT_SYMBOL_GPL() in mm/memory_hotplug.c, or the entire
definition has to move into the header as a static inline. Wondering which
way would be better?
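
For concreteness, the export route would look roughly like this (sketch only;
the helper body here is indicative, not the exact code in this series):

/* include/linux/memory_hotplug.h: keep the prototype unconditional */
struct range memhp_get_pluggable_range(bool need_mapping);

/* mm/memory_hotplug.c: non-static definition plus an export for modular drivers */
struct range memhp_get_pluggable_range(bool need_mapping)
{
	const u64 max_phys = (1ULL << MAX_PHYSMEM_BITS) - 1;
	struct range range;

	if (need_mapping) {
		/* clamp the arch linear-mapping limit to the addressable limit */
		range = arch_get_mappable_range();
		range.end = min_t(u64, range.end, max_phys);
	} else {
		range.start = 0;
		range.end = max_phys;
	}
	return range;
}
EXPORT_SYMBOL_GPL(memhp_get_pluggable_range);

The static inline alternative would instead move this whole body into
include/linux/memory_hotplug.h, avoiding the export but exposing the
implementation details in the header.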

- Anshuman

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] s390/mm: Define arch_get_mappable_range()
  2020-12-17 11:45                 ` Anshuman Khandual
@ 2020-12-17 12:18                   ` David Hildenbrand
  -1 siblings, 0 replies; 32+ messages in thread
From: David Hildenbrand @ 2020-12-17 12:18 UTC (permalink / raw)
  To: Anshuman Khandual, Heiko Carstens
  Cc: linux-mm, akpm, catalin.marinas, linux-arm-kernel, linux-s390,
	linux-kernel, Vasily Gorbik, Will Deacon, Ard Biesheuvel,
	Mark Rutland

On 17.12.20 12:45, Anshuman Khandual wrote:
> 
> 
> On 12/10/20 12:34 PM, David Hildenbrand wrote:
>>
>>> Am 10.12.2020 um 07:58 schrieb Heiko Carstens <hca@linux.ibm.com>:
>>>
>>> On Thu, Dec 10, 2020 at 09:48:11AM +0530, Anshuman Khandual wrote:
>>>>>> Alternatively leaving __segment_load() and vmem_add_memory() unchanged
>>>>>> will create three range checks i.e two memhp_range_allowed() and the
>>>>>> existing VMEM_MAX_PHYS check in vmem_add_mapping() on all the hotplug
>>>>>> paths, which is not optimal.
>>>>>
>>>>> Ah, sorry. I didn't follow this discussion too closely. I just thought
>>>>> my point of view would be clear: let's not have two different ways to
>>>>> check for the same thing which must be kept in sync.
>>>>> Therefore I was wondering why this next version is still doing
>>>>> that. Please find a way to solve this.
>>>>
>>>> The following change is after the current series and should work with
>>>> and without memory hotplug enabled. There will be just a single place
>>>> i.e vmem_get_max_addr() to update in case the maximum address changes
>>>> from VMEM_MAX_PHYS to something else later.
>>>
>>> Still not. That's way too much code churn for what you want to achieve.
>>> If the s390 specific patch would look like below you can add
>>>
>>> Acked-by: Heiko Carstens <hca@linux.ibm.com>
>>>
>>> But please make sure that the arch_get_mappable_range() prototype in
>>> linux/memory_hotplug.h is always visible and does not depend on
>>> CONFIG_MEMORY_HOTPLUG. I'd like to avoid seeing sparse warnings
>>> because of this.
>>>
>>> Thanks.
>>>
>>> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>>> index 77767850d0d0..e0e78234ae57 100644
>>> --- a/arch/s390/mm/init.c
>>> +++ b/arch/s390/mm/init.c
>>> @@ -291,6 +291,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>>    if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
>>>        return -EINVAL;
>>>
>>> +    VM_BUG_ON(!memhp_range_allowed(start, size, 1));
>>>    rc = vmem_add_mapping(start, size);
>>>    if (rc)
>>>        return rc;
>>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>>> index b239f2ba93b0..ccd55e2f97f9 100644
>>> --- a/arch/s390/mm/vmem.c
>>> +++ b/arch/s390/mm/vmem.c
>>> @@ -4,6 +4,7 @@
>>>  *    Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
>>>  */
>>>
>>> +#include <linux/memory_hotplug.h>
>>> #include <linux/memblock.h>
>>> #include <linux/pfn.h>
>>> #include <linux/mm.h>
>>> @@ -532,11 +533,23 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
>>>    mutex_unlock(&vmem_mutex);
>>> }
>>>
>>> +struct range arch_get_mappable_range(void)
>>> +{
>>> +    struct range range;
>>> +
>>> +    range.start = 0;
>>> +    range.end = VMEM_MAX_PHYS;
>>> +    return range;
>>> +}
>>> +
>>> int vmem_add_mapping(unsigned long start, unsigned long size)
>>> {
>>> +    struct range range;
>>>    int ret;
>>>
>>> -    if (start + size > VMEM_MAX_PHYS ||
>>> +    range = arch_get_mappable_range();
>>> +    if (start < range.start ||
>>> +        start + size > range.end ||
>>>        start + size < start)
>>>        return -ERANGE;
>>>
>>>
>>
>> Right, what I had in mind as a reply to v1. Not sure if we really need new checks in common code. Having a new memhp_get_pluggable_range() would be sufficient for my use case (virtio-mem).
> 
> Hello David,
> 
> Quick question. Currently memhp_get_pluggable_range() is a mm/memory_hotplug.c
> internal static inline function; only memhp_range_allowed() is available via
> the header include/linux/memory_hotplug.h. For memhp_get_pluggable_range()
> to be visible to drivers, it either needs to be declared in the header and
> exported via EXPORT_SYMBOL_GPL() in mm/memory_hotplug.c, or the entire
> definition has to move into the header as a static inline. Wondering which
> way would be better?

As it's most likely not on any hot path, exporting the symbol might be
the cleanest approach.
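
A driver could then pre-validate a candidate region at probe time, along
these lines (rough sketch, assuming the exported helper and that the region
needs a linear mapping; region_is_pluggable() is just an illustrative name):

static bool region_is_pluggable(u64 start, u64 size)
{
	struct range range = memhp_get_pluggable_range(true);

	/* empty or wrapping regions can never be hotplugged */
	if (!size || start + size < start)
		return false;

	return start >= range.start && start + size - 1 <= range.end;
}

Anything outside that window can then be rejected, or at least warned about,
when the device is probed, rather than failing later deep inside
arch_add_memory().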

> 
> - Anshuman
> 


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2020-12-17 12:20 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-08  4:16 [PATCH 0/3] mm/hotplug: Pre-validate the address range with platform Anshuman Khandual
2020-12-08  4:16 ` Anshuman Khandual
2020-12-08  4:16 ` [PATCH 1/3] mm/hotplug: Prevalidate the address range being added " Anshuman Khandual
2020-12-08  4:16   ` Anshuman Khandual
2020-12-08  4:16 ` [PATCH 2/3] arm64/mm: Define arch_get_mappable_range() Anshuman Khandual
2020-12-08  4:16   ` Anshuman Khandual
2020-12-08  4:16 ` [PATCH 3/3] s390/mm: " Anshuman Khandual
2020-12-08  4:16   ` Anshuman Khandual
2020-12-08 15:27   ` Heiko Carstens
2020-12-08 15:27     ` Heiko Carstens
2020-12-09  2:37     ` Anshuman Khandual
2020-12-09  2:37       ` Anshuman Khandual
2020-12-09 14:57       ` Heiko Carstens
2020-12-09 14:57         ` Heiko Carstens
2020-12-10  4:18         ` Anshuman Khandual
2020-12-10  4:18           ` Anshuman Khandual
2020-12-10  6:58           ` Heiko Carstens
2020-12-10  6:58             ` Heiko Carstens
2020-12-10  7:04             ` David Hildenbrand
2020-12-10  7:04               ` David Hildenbrand
2020-12-10  7:40               ` Anshuman Khandual
2020-12-10  7:40                 ` Anshuman Khandual
2020-12-10  8:02                 ` David Hildenbrand
2020-12-10  8:02                   ` David Hildenbrand
2020-12-10  8:58                   ` Anshuman Khandual
2020-12-10  8:58                     ` Anshuman Khandual
2020-12-10  9:39                     ` David Hildenbrand
2020-12-10  9:39                       ` David Hildenbrand
2020-12-17 11:45               ` Anshuman Khandual
2020-12-17 11:45                 ` Anshuman Khandual
2020-12-17 12:18                 ` David Hildenbrand
2020-12-17 12:18                   ` David Hildenbrand
