* [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange
@ 2021-12-15 14:52 Ard Biesheuvel
  2021-12-15 14:52 ` [PATCH 1/2] arm64: simplify rules for defining ARM64_MEMSTART_ALIGN Ard Biesheuvel
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Ard Biesheuvel @ 2021-12-15 14:52 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will, mark.rutland, Ard Biesheuvel, Kefeng Wang

Kefeng reports in [0] that using PArange to size the randomized linear
region offset leads to cases where randomization is no longer possible
even if the actual placement of DRAM in memory would otherwise have
permitted it.

Instead of using CONFIG_MEMORY_HOTPLUG to choose at build time between
two different behaviors in this regard, let's try addressing this by
reducing the minimum relative alignment between VA and PA in the linear
region, and taking advantage of the space at the base of physical memory
below the first memblock to permit some randomization of the placement
of physical DRAM in the virtual address map.

Cc: Kefeng Wang <wangkefeng.wang@huawei.com>

[0] https://lore.kernel.org/linux-arm-kernel/20211104062747.55206-1-wangkefeng.wang@huawei.com/

Ard Biesheuvel (2):
  arm64: simplify rules for defining ARM64_MEMSTART_ALIGN
  arm64: kaslr: take free space at start of DRAM into account

 arch/arm64/include/asm/kernel-pgtable.h | 27 +++-----------------
 arch/arm64/mm/init.c                    |  3 ++-
 2 files changed, 6 insertions(+), 24 deletions(-)

-- 
2.30.2


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* [PATCH 1/2] arm64: simplify rules for defining ARM64_MEMSTART_ALIGN
  2021-12-15 14:52 [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange Ard Biesheuvel
@ 2021-12-15 14:52 ` Ard Biesheuvel
  2021-12-15 14:52 ` [PATCH 2/2] arm64: kaslr: take free space at start of DRAM into account Ard Biesheuvel
  2021-12-16  7:37 ` [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange Kefeng Wang
  2 siblings, 0 replies; 7+ messages in thread
From: Ard Biesheuvel @ 2021-12-15 14:52 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will, mark.rutland, Ard Biesheuvel, Kefeng Wang

ARM64_MEMSTART_ALIGN defines the minimum alignment of the translation
between virtual and physical addresses, so that data structures dealing
with physical addresses (such as the vmemmap struct page array) appear
sufficiently aligned in memory.

We currently increase this value artificially to a 'better' value based
on the assumption that being able to use larger block mappings is
preferable, even though we rarely do so now that rodata=full is the
default.

So let's simplify this, and always define ARM64_MEMSTART_ALIGN in terms
of the vmemmap section size.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/kernel-pgtable.h | 27 +++-----------------
 1 file changed, 4 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 96dc0f7da258..505ae0d560e6 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -113,30 +113,11 @@
 #endif
 
 /*
- * To make optimal use of block mappings when laying out the linear
- * mapping, round down the base of physical memory to a size that can
- * be mapped efficiently, i.e., either PUD_SIZE (4k granule) or PMD_SIZE
- * (64k granule), or a multiple that can be mapped using contiguous bits
- * in the page tables: 32 * PMD_SIZE (16k granule)
+ * The MM code assumes that struct page arrays belonging to a vmemmap section
+ * appear naturally aligned in memory. This implies that the minimum relative
+ * alignment between virtual and physical addresses in the linear region must
+ * equal the section size.
  */
-#if defined(CONFIG_ARM64_4K_PAGES)
-#define ARM64_MEMSTART_SHIFT		PUD_SHIFT
-#elif defined(CONFIG_ARM64_16K_PAGES)
-#define ARM64_MEMSTART_SHIFT		CONT_PMD_SHIFT
-#else
-#define ARM64_MEMSTART_SHIFT		PMD_SHIFT
-#endif
-
-/*
- * sparsemem vmemmap imposes an additional requirement on the alignment of
- * memstart_addr, due to the fact that the base of the vmemmap region
- * has a direct correspondence, and needs to appear sufficiently aligned
- * in the virtual address space.
- */
-#if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
 #define ARM64_MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)
-#else
-#define ARM64_MEMSTART_ALIGN	(1UL << ARM64_MEMSTART_SHIFT)
-#endif
 
 #endif	/* __ASM_KERNEL_PGTABLE_H */
-- 
2.30.2




* [PATCH 2/2] arm64: kaslr: take free space at start of DRAM into account
  2021-12-15 14:52 [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange Ard Biesheuvel
  2021-12-15 14:52 ` [PATCH 1/2] arm64: simplify rules for defining ARM64_MEMSTART_ALIGN Ard Biesheuvel
@ 2021-12-15 14:52 ` Ard Biesheuvel
  2021-12-16  7:37 ` [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange Kefeng Wang
  2 siblings, 0 replies; 7+ messages in thread
From: Ard Biesheuvel @ 2021-12-15 14:52 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, will, mark.rutland, Ard Biesheuvel, Kefeng Wang

Commit 97d6786e0669 ("arm64: mm: account for hotplug memory when
randomizing the linear region") limited the randomization range of the
linear region substantially, or even eliminated it entirely for
configurations where the VA range equals or exceeds the maximum PA
range, even in cases where most of the PA range is not occupied to begin
with.

In such cases, we can recover this ability to some extent by taking
advantage of the reduced value of ARM64_MEMSTART_ALIGN, and disregarding
the physical region below the first memblock, allowing us to randomize
the placement of physical DRAM within the linear region even in cases
where the PArange equals the virtual range.

NOTE: this relies on the assumption that hotpluggable memory will never
appear below the lowest boot-time memblock memory region, but only
above.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/mm/init.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index a8834434af99..b3ffb356bc8b 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -288,7 +288,8 @@ void __init arm64_memblock_init(void)
 		int parange = cpuid_feature_extract_unsigned_field(
 					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
 		s64 range = linear_region_size -
-			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
+			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange)) +
+			    memblock_start_of_DRAM();
 
 		/*
 		 * If the size of the linear region exceeds, by a sufficient
-- 
2.30.2




* Re: [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange
  2021-12-15 14:52 [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange Ard Biesheuvel
  2021-12-15 14:52 ` [PATCH 1/2] arm64: simplify rules for defining ARM64_MEMSTART_ALIGN Ard Biesheuvel
  2021-12-15 14:52 ` [PATCH 2/2] arm64: kaslr: take free space at start of DRAM into account Ard Biesheuvel
@ 2021-12-16  7:37 ` Kefeng Wang
  2021-12-16  8:56   ` Ard Biesheuvel
  2 siblings, 1 reply; 7+ messages in thread
From: Kefeng Wang @ 2021-12-16  7:37 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel; +Cc: catalin.marinas, will, mark.rutland


On 2021/12/15 22:52, Ard Biesheuvel wrote:
> Kefeng reports in [0] that using PArange to size the randomized linear
> region offset leads to cases where randomization is no longer possible
> even if the actual placement of DRAM in memory would otherwise have
> permitted it.
>
> Instead of using CONFIG_MEMORY_HOTPLUG to choose at build time between
> two different behaviors in this regard, let's try addressing this by
> reducing the minimum relative alignment between VA and PA in the linear
> region, and taking advantage of the space at the base of physical memory
> below the first memblock to permit some randomization of the placement
> of physical DRAM in the virtual address map.
VArange == PArange is OK, but our case is VA=39/PA=48, which still does
not work :(

Could we add a way (maybe a cmdline option) to set the max PArange, so
that we could make randomization work? Or is there some other way?


> Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> [0] https://lore.kernel.org/linux-arm-kernel/20211104062747.55206-1-wangkefeng.wang@huawei.com/
>
> Ard Biesheuvel (2):
>    arm64: simplify rules for defining ARM64_MEMSTART_ALIGN
>    arm64: kaslr: take free space at start of DRAM into account
>
>   arch/arm64/include/asm/kernel-pgtable.h | 27 +++-----------------
>   arch/arm64/mm/init.c                    |  3 ++-
>   2 files changed, 6 insertions(+), 24 deletions(-)
>



* Re: [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange
  2021-12-16  7:37 ` [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange Kefeng Wang
@ 2021-12-16  8:56   ` Ard Biesheuvel
  2021-12-16 11:32     ` Kefeng Wang
  2022-02-15  2:09     ` Kefeng Wang
  0 siblings, 2 replies; 7+ messages in thread
From: Ard Biesheuvel @ 2021-12-16  8:56 UTC (permalink / raw)
  To: Kefeng Wang, Marc Zyngier
  Cc: Linux ARM, Catalin Marinas, Will Deacon, Mark Rutland

(+ Marc)

On Thu, 16 Dec 2021 at 08:37, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
>
> On 2021/12/15 22:52, Ard Biesheuvel wrote:
> > Kefeng reports in [0] that using PArange to size the randomized linear
> > region offset leads to cases where randomization is no longer possible
> > even if the actual placement of DRAM in memory would otherwise have
> > permitted it.
> >
> > Instead of using CONFIG_MEMORY_HOTPLUG to choose at build time between
> > two different behaviors in this regard, let's try addressing this by
> > reducing the minimum relative alignment between VA and PA in the linear
> > region, and taking advantage of the space at the base of physical memory
> > below the first memblock to permit some randomization of the placement
> > of physical DRAM in the virtual address map.
> VArange == PArange is OK, but our case is VA=39/PA=48, which still does
> not work :(
>
> Could we add a way (maybe a cmdline option) to set the max PArange, so
> that we could make randomization work? Or is there some other way?
>

We could, but it is not a very elegant way to recover this
randomization range. You would need to reduce the PArange to 36 bits
(which is the next valid option below 40) in order to ensure that a
39-bit VA kernel has some room for randomization, but this would not
work on many systems because they require 40-bit physical addressing,
due to the placement of DRAM in the PA space, not the DRAM size.

Android 5.10 is in the same boat (and needs CONFIG_MEMORY_HOTPLUG=y)
so I agree we need something better here.



>
> > Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
> >
> > [0] https://lore.kernel.org/linux-arm-kernel/20211104062747.55206-1-wangkefeng.wang@huawei.com/
> >
> > Ard Biesheuvel (2):
> >    arm64: simplify rules for defining ARM64_MEMSTART_ALIGN
> >    arm64: kaslr: take free space at start of DRAM into account
> >
> >   arch/arm64/include/asm/kernel-pgtable.h | 27 +++-----------------
> >   arch/arm64/mm/init.c                    |  3 ++-
> >   2 files changed, 6 insertions(+), 24 deletions(-)
> >



* Re: [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange
  2021-12-16  8:56   ` Ard Biesheuvel
@ 2021-12-16 11:32     ` Kefeng Wang
  2022-02-15  2:09     ` Kefeng Wang
  1 sibling, 0 replies; 7+ messages in thread
From: Kefeng Wang @ 2021-12-16 11:32 UTC (permalink / raw)
  To: Ard Biesheuvel, Marc Zyngier
  Cc: Linux ARM, Catalin Marinas, Will Deacon, Mark Rutland


On 2021/12/16 16:56, Ard Biesheuvel wrote:
> (+ Marc)
>
> On Thu, 16 Dec 2021 at 08:37, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>>
>> On 2021/12/15 22:52, Ard Biesheuvel wrote:
>>> Kefeng reports in [0] that using PArange to size the randomized linear
>>> region offset leads to cases where randomization is no longer possible
>>> even if the actual placement of DRAM in memory would otherwise have
>>> permitted it.
>>>
>>> Instead of using CONFIG_MEMORY_HOTPLUG to choose at build time between
>>> two different behaviors in this regard, let's try addressing this by
>>> reducing the minimum relative alignment between VA and PA in the linear
>>> region, and taking advantage of the space at the base of physical memory
>>> below the first memblock to permit some randomization of the placement
>>> of physical DRAM in the virtual address map.
>> VArange == PArange is OK, but our case is VA=39/PA=48, which still does
>> not work :(
>>
>> Could we add a way (maybe a cmdline option) to set the max PArange, so
>> that we could make randomization work? Or is there some other way?
>>
> We could, but it is not a very elegant way to recover this
> randomization range. You would need to reduce the PArange to 36 bits
> (which is the next valid option below 40) in order to ensure that a
> 39-bit VA kernel has some room for randomization, but this would not
> work on many systems because they require 40-bit physical addressing,
> due to the placement of DRAM in the PA space, not the DRAM size.
Yes, a cmdline option is not elegant, but we can't find a better way to fix this.
> Android 5.10 is in the same boat (and needs CONFIG_MEMORY_HOTPLUG=y)
> so I agree we need something better here.

It's not only Android; some embedded systems without much memory also need
KASLR together with MEMORY_HOTPLUG.


>
>
>
>>> Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>
>>> [0] https://lore.kernel.org/linux-arm-kernel/20211104062747.55206-1-wangkefeng.wang@huawei.com/
>>>
>>> Ard Biesheuvel (2):
>>>     arm64: simplify rules for defining ARM64_MEMSTART_ALIGN
>>>     arm64: kaslr: take free space at start of DRAM into account
>>>
>>>    arch/arm64/include/asm/kernel-pgtable.h | 27 +++-----------------
>>>    arch/arm64/mm/init.c                    |  3 ++-
>>>    2 files changed, 6 insertions(+), 24 deletions(-)
>>>
> .



* Re: [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange
  2021-12-16  8:56   ` Ard Biesheuvel
  2021-12-16 11:32     ` Kefeng Wang
@ 2022-02-15  2:09     ` Kefeng Wang
  1 sibling, 0 replies; 7+ messages in thread
From: Kefeng Wang @ 2022-02-15  2:09 UTC (permalink / raw)
  To: Ard Biesheuvel, Marc Zyngier
  Cc: Linux ARM, Catalin Marinas, Will Deacon, Mark Rutland, Kefeng Wang


On 2021/12/16 16:56, Ard Biesheuvel wrote:
> (+ Marc)
>
> On Thu, 16 Dec 2021 at 08:37, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>>
>> On 2021/12/15 22:52, Ard Biesheuvel wrote:
>>> Kefeng reports in [0] that using PArange to size the randomized linear
>>> region offset leads to cases where randomization is no longer possible
>>> even if the actual placement of DRAM in memory would otherwise have
>>> permitted it.
>>>
>>> Instead of using CONFIG_MEMORY_HOTPLUG to choose at build time between
>>> two different behaviors in this regard, let's try addressing this by
>>> reducing the minimum relative alignment between VA and PA in the linear
>>> region, and taking advantage of the space at the base of physical memory
>>> below the first memblock to permit some randomization of the placement
>>> of physical DRAM in the virtual address map.
>> VArange == PArange is OK, but our case is VA=39/PA=48, which still does
>> not work :(
>>
>> Could we add a way (maybe a cmdline option) to set the max PArange, so
>> that we could make randomization work? Or is there some other way?
>>
> We could, but it is not a very elegant way to recover this
> randomization range. You would need to reduce the PArange to 36 bits
> (which is the next valid option below 40) in order to ensure that a
> 39-bit VA kernel has some room for randomization, but this would not
> work on many systems because they require 40-bit physical addressing,
> due to the placement of DRAM in the PA space, not the DRAM size.
>
> Android 5.10 is in the same boat (and needs CONFIG_MEMORY_HOTPLUG=y)
> so I agree we need something better here.
>
>
Could we reuse the "linux,usable-memory-range" property?

For now, this property is only used to determine the available memory for
a crash dump kernel.

For the first kernel, we could use this property to describe all available
physical memory, including the hotplug memory range; we must also make
sure that the range in the fdt is reasonable.

Here is a draft; how about this way?


diff --git a/Documentation/devicetree/bindings/chosen.txt b/Documentation/devicetree/bindings/chosen.txt
index 1cc3aa10dcb1..18ab9046dcd0 100644
--- a/Documentation/devicetree/bindings/chosen.txt
+++ b/Documentation/devicetree/bindings/chosen.txt
@@ -99,8 +99,12 @@ The main usage is for crash dump kernel to identify its own usable
  memory and exclude, at its boot time, any other memory areas that are
  part of the panicked kernel's memory.

-While this property does not represent a real hardware, the address
-and the size are expressed in #address-cells and #size-cells,
+When it is used for the first kernel (arm64 only, optional), it must contain
+all the physical memory ranges, including hotplug memory; the range will
+be used for calculating the max randomization range of the linear region
+if CONFIG_RANDOMIZE_BASE is enabled on arm64.
+
+The address and the size are expressed in #address-cells and #size-cells,
  respectively, of the root node.

  linux,elfcorehdr
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index db63cc885771..a8f7d619550b 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -189,6 +189,40 @@ static int __init early_mem(char *p)
  }
  early_param("mem", early_mem);

+static void arm64_randomize_linear_region_setup(void)
+{
+    phys_addr_t usable_start, usable_size;
+    u64 mmfr0;
+    s64 range;
+    extern u16 memstart_offset_seed;
+    int parange;
+
+    if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE) || memstart_offset_seed == 0)
+        return;
+
+    mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+    parange = cpuid_feature_extract_unsigned_field(
+                mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+    range = BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
+
+    of_get_usable_mem_range(&usable_start, &usable_size);
+    if (!usable_size || usable_start + usable_size > range)
+        usable_size = range;
+
+    range = linear_region_size - usable_size;
+
+    /*
+     * If the size of the linear region exceeds, by a sufficient
+     * margin, the size of the region that the physical memory can
+     * span, randomize the linear region as well.
+     */
+    if (range >= (s64)ARM64_MEMSTART_ALIGN) {
+        range /= ARM64_MEMSTART_ALIGN;
+        memstart_addr -= ARM64_MEMSTART_ALIGN *
+                 ((range * memstart_offset_seed) >> 16);
+    } else {
+        pr_warn("linear mappings size is too small for KASLR\n");
+    }
+}
+
  void __init arm64_memblock_init(void)
  {
      s64 linear_region_size = PAGE_END - _PAGE_OFFSET(vabits_actual);
@@ -282,25 +316,7 @@ void __init arm64_memblock_init(void)
          }
      }

-    if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
-        extern u16 memstart_offset_seed;
-        u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
-        int parange = cpuid_feature_extract_unsigned_field(
-                    mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
-        s64 range = linear_region_size -
-                BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
-
-        /*
-         * If the size of the linear region exceeds, by a sufficient
-         * margin, the size of the region that the physical memory can
-         * span, randomize the linear region as well.
-         */
-        if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
-            range /= ARM64_MEMSTART_ALIGN;
-            memstart_addr -= ARM64_MEMSTART_ALIGN *
-                     ((range * memstart_offset_seed) >> 16);
-        }
-    }
+    arm64_randomize_linear_region_setup();

      /*
       * Register the kernel text, kernel data, initrd, and initial
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index acfae9b41cc8..0a53ff9d5766 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1450,6 +1450,11 @@ struct range arch_get_mappable_range(void)
      u64 end_linear_pa = __pa(PAGE_END - 1);

      if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+        phys_addr_t usable_start, usable_size;
+
+        of_get_usable_mem_range(&usable_start, &usable_size);
+        if (usable_size)
+            end_linear_pa = min(end_linear_pa, usable_start + usable_size);
+
          /*
           * Check for a wrap, it is possible because of randomized linear
           * mapping the start physical address is actually bigger than
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index ad85ff6474ff..191011912ced 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -972,6 +972,14 @@ static void __init early_init_dt_check_for_elfcorehdr(unsigned long node)
  }

  static unsigned long chosen_node_offset = -FDT_ERR_NOTFOUND;
+static phys_addr_t cap_mem_addr __ro_after_init;
+static phys_addr_t cap_mem_size __ro_after_init;
+
+void of_get_usable_mem_range(phys_addr_t *usable_start, phys_addr_t *usable_size)
+{
+    *usable_start = cap_mem_addr;
+    *usable_size = cap_mem_size;
+}

  /**
   * early_init_dt_check_for_usable_mem_range - Decode usable memory range
@@ -981,8 +989,6 @@ void __init early_init_dt_check_for_usable_mem_range(void)
  {
      const __be32 *prop;
      int len;
-    phys_addr_t cap_mem_addr;
-    phys_addr_t cap_mem_size;
      unsigned long node = chosen_node_offset;

      if ((long)node < 0)
diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h
index d69ad5bb1eb1..be9b9e5a693f 100644
--- a/include/linux/of_fdt.h
+++ b/include/linux/of_fdt.h
@@ -83,6 +83,7 @@ extern void unflatten_device_tree(void);
  extern void unflatten_and_copy_device_tree(void);
  extern void early_init_devtree(void *);
  extern void early_get_first_memblock_info(void *, phys_addr_t *);
+extern void of_get_usable_mem_range(phys_addr_t *usable_start, phys_addr_t *usable_size);
  #else /* CONFIG_OF_EARLY_FLATTREE */
  static inline void early_init_dt_check_for_usable_mem_range(void) {}
  static inline int early_init_dt_scan_chosen_stdout(void) { return -ENODEV; }
@@ -91,6 +92,7 @@ static inline void early_init_fdt_reserve_self(void) {}
  static inline const char *of_flat_dt_get_machine_name(void) { return NULL; }
  static inline void unflatten_device_tree(void) {}
  static inline void unflatten_and_copy_device_tree(void) {}
+static inline void of_get_usable_mem_range(phys_addr_t *usable_start, phys_addr_t *usable_size) {}
  #endif /* CONFIG_OF_EARLY_FLATTREE */

  #endif /* __ASSEMBLY__ */


>>> Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>
>>> [0] https://lore.kernel.org/linux-arm-kernel/20211104062747.55206-1-wangkefeng.wang@huawei.com/
>>>
>>> Ard Biesheuvel (2):
>>>     arm64: simplify rules for defining ARM64_MEMSTART_ALIGN
>>>     arm64: kaslr: take free space at start of DRAM into account
>>>
>>>    arch/arm64/include/asm/kernel-pgtable.h | 27 +++-----------------
>>>    arch/arm64/mm/init.c                    |  3 ++-
>>>    2 files changed, 6 insertions(+), 24 deletions(-)
>>>
> .



end of thread, other threads:[~2022-02-15  2:11 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-15 14:52 [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange Ard Biesheuvel
2021-12-15 14:52 ` [PATCH 1/2] arm64: simplify rules for defining ARM64_MEMSTART_ALIGN Ard Biesheuvel
2021-12-15 14:52 ` [PATCH 2/2] arm64: kaslr: take free space at start of DRAM into account Ard Biesheuvel
2021-12-16  7:37 ` [PATCH 0/2] arm64: permit KASLR in linear region even VArange == PArange Kefeng Wang
2021-12-16  8:56   ` Ard Biesheuvel
2021-12-16 11:32     ` Kefeng Wang
2022-02-15  2:09     ` Kefeng Wang
