* [PATCH] of: of_reserved_mem: Increase limit for reserved_mem regions
From: Patrick Daly @ 2022-04-20 21:09 UTC (permalink / raw)
To: robh+dt, linux-arm-kernel, devicetree, surenb; +Cc: Patrick Daly
The reserved_mem array must be statically allocated because it is used
prior to memblock being aware of all "no-map" or otherwise reserved
regions which have fixed physical addresses. Due to this limitation,
if one architecture/board has a large number of reserved_mem regions,
this limit must be raised for all.
In particular, certain new qcom boards currently have 63 reserved memory
regions; as new features are added, this pushes them over the existing
limit of 64.
A generalized breakdown by region type:
13 for linux-loaded device firmware
9 for guest-vms or inter-vm communication
15 cma heaps/dma-buf heaps
24 for bootloaders/hypervisor/secure-world devices or software
2 misc
Although this number could be reduced slightly by combining physically
adjacent regions, that comes at the cost of losing documentation of
what/who the regions are used by. In addition, combining adjacent
regions is not possible if there are phandles in the devicetree
referring to the regions in question, such as "memory-region".
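As a hypothetical illustration (node names, labels, addresses and sizes are all invented, not taken from any real qcom board), two adjacent carveouts that cannot be merged, because a consumer refers to one of them by phandle, might look like:

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Two physically adjacent carveouts. Merging them would
		 * lose the per-region documentation and would break the
		 * memory-region phandle reference below. */
		modem_fw_mem: modem-fw@80000000 {
			reg = <0x0 0x80000000 0x0 0x01000000>;
			no-map;
		};

		adsp_fw_mem: adsp-fw@81000000 {
			reg = <0x0 0x81000000 0x0 0x00800000>;
			no-map;
		};
	};

	remoteproc-adsp {
		/* The driver locates its carveout via this phandle. */
		memory-region = <&adsp_fw_mem>;
	};
};
```

Each such node consumes one entry in the statically sized reserved_mem array.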
Vmlinux before:
text data bss dec hex filename
31030829 15807732 588524 47427085 2d3ae0d dist/vmlinux
Vmlinux after:
text data bss dec hex filename
31030877 15807668 592108 47430653 2d3bbfd dist/vmlinux
Signed-off-by: Patrick Daly <quic_pdaly@quicinc.com>
---
drivers/of/of_reserved_mem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 75caa6f..de0cdda 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -26,7 +26,7 @@
#include "of_private.h"
-#define MAX_RESERVED_REGIONS 64
+#define MAX_RESERVED_REGIONS 128
static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
static int reserved_mem_count;
--
2.7.4
* Re: [PATCH] of: of_reserved_mem: Increase limit for reserved_mem regions
From: Rob Herring @ 2022-04-22 13:48 UTC (permalink / raw)
To: Patrick Daly; +Cc: linux-arm-kernel, devicetree, surenb
On Wed, Apr 20, 2022 at 4:09 PM Patrick Daly <quic_pdaly@quicinc.com> wrote:
>
> The reserved_mem array must be statically allocated because it is used
> prior to memblock being aware of all "no-map" or otherwise reserved
> regions which have fixed physical addresses. Due to this limitation,
> if one architecture/board has a large number of reserved_mem regions,
> this limit must be raised for all.
>
> In particular, certain new qcom boards currently have 63 reserved memory
> regions, which when new features are added, pushes them over the existing
> limit of 64.
Please revive this instead:
https://lore.kernel.org/all/20211119075844.2902592-3-calvinzhang.cool@gmail.com/
>
> A generalized breakdown by region type:
> 13 for linux-loaded device firmware
If loaded by linux, why do you need fixed carveouts in DT? The devices
can't be told what address their fw is at?
> 9 for guest-vms or inter-vm communication
Why does that need to be in DT?
> 15 cma heaps/dma-buf heaps
Sounds like not trusting the OS to allocate memory itself. From what
I've read, the kernel's memory allocation abilities are better now
than when CMA was added.
> 24 for bootloaders/hypervisor/secure-world devices or software
> 2 misc
>
> Although this number could be reduced by a minor amount by combining
> physically adjacent regions, this comes at the cost of losing
> documentation on what/who the regions are used by. In addition, combining
> adjacent regions is not possible if there are phandles in devicetree
> referring to the regions in question, such as "memory-region".
>
> Vmlinux before:
> text data bss dec hex filename
> 31030829 15807732 588524 47427085 2d3ae0d dist/vmlinux
>
> Vmlinux after:
> text data bss dec hex filename
> 31030877 15807668 592108 47430653 2d3bbfd dist/vmlinux
>
> Signed-off-by: Patrick Daly <quic_pdaly@quicinc.com>
> ---
> drivers/of/of_reserved_mem.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
> index 75caa6f..de0cdda 100644
> --- a/drivers/of/of_reserved_mem.c
> +++ b/drivers/of/of_reserved_mem.c
> @@ -26,7 +26,7 @@
>
> #include "of_private.h"
>
> -#define MAX_RESERVED_REGIONS 64
> +#define MAX_RESERVED_REGIONS 128
> static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
> static int reserved_mem_count;
>
> --
> 2.7.4
>
* Re: [PATCH] of: of_reserved_mem: Increase limit for reserved_mem regions
From: Patrick Daly @ 2022-04-25 22:32 UTC (permalink / raw)
To: Rob Herring; +Cc: linux-arm-kernel, devicetree, surenb
On Fri, Apr 22, 2022 at 08:48:25AM -0500, Rob Herring wrote:
> On Wed, Apr 20, 2022 at 4:09 PM Patrick Daly <quic_pdaly@quicinc.com> wrote:
> >
> > The reserved_mem array must be statically allocated because it is used
> > prior to memblock being aware of all "no-map" or otherwise reserved
> > regions which have fixed physical addresses. Due to this limitation,
> > if one architecture/board has a large number of reserved_mem regions,
> > this limit must be raised for all.
> >
> > In particular, certain new qcom boards currently have 63 reserved memory
> > regions, which when new features are added, pushes them over the existing
> > limit of 64.
>
> Please revive this instead:
>
> https://lore.kernel.org/all/20211119075844.2902592-3-calvinzhang.cool@gmail.com/
Looks interesting, thanks for pointing it out.
>
> >
> > A generalized breakdown by region type:
> > 13 for linux-loaded device firmware
>
> If loaded by linux, why do you need fixed carveouts in DT? The devices
> can't be told what address their fw is at?
>
These are good questions, but I don't know enough about the various usecases to
give complete answers.
> > 9 for guest-vms or inter-vm communication
>
> Why does that need to be in DT?
It's a type-1 hypervisor, so the guest VMs don't run in userspace. We want
to use largely physically contiguous memory to reduce TLB pressure, so a
devicetree carveout seemed to be a simple way of realizing this.
>
> > 15 cma heaps/dma-buf heaps
>
> Sounds like not trusting the OS to allocate memory itself. From what
> I've read, the kernel's memory allocation abilities are better now
> than when CMA was added.
Physically contiguous memory larger than MAX_ORDER is only available through
CMA, so it comes down to whether the usecase supports scattered memory or not.
I believe there is also a concern that placing many usecases in the same
CMA area could lead to fragmentation & allocation failures for sizes which
are a large percentage of the total CMA area size.
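A sketch of how such per-usecase CMA areas are commonly carved out in DT via
"shared-dma-pool" (pool names and sizes here are invented for illustration);
each area becomes its own entry in the reserved_mem array:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Hypothetical per-usecase CMA heaps. Giving each usecase its
	 * own area keeps one heavy user from fragmenting a shared pool. */
	camera_cma: camera-pool {
		compatible = "shared-dma-pool";
		size = <0x0 0x10000000>;	/* 256 MiB */
		reusable;
	};

	video_cma: video-pool {
		compatible = "shared-dma-pool";
		size = <0x0 0x08000000>;	/* 128 MiB */
		reusable;
	};
};
```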
>
> > 24 for bootloaders/hypervisor/secure-world devices or software
> > 2 misc
> >
> > Although this number could be reduced by a minor amount by combining
> > physically adjacent regions, this comes at the cost of losing
> > documentation on what/who the regions are used by. In addition, combining
> > adjacent regions is not possible if there are phandles in devicetree
> > referring to the regions in question, such as "memory-region".
> >
> > Vmlinux before:
> > text data bss dec hex filename
> > 31030829 15807732 588524 47427085 2d3ae0d dist/vmlinux
> >
> > Vmlinux after:
> > text data bss dec hex filename
> > 31030877 15807668 592108 47430653 2d3bbfd dist/vmlinux
> >
> > Signed-off-by: Patrick Daly <quic_pdaly@quicinc.com>
> > ---
> > drivers/of/of_reserved_mem.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
> > index 75caa6f..de0cdda 100644
> > --- a/drivers/of/of_reserved_mem.c
> > +++ b/drivers/of/of_reserved_mem.c
> > @@ -26,7 +26,7 @@
> >
> > #include "of_private.h"
> >
> > -#define MAX_RESERVED_REGIONS 64
> > +#define MAX_RESERVED_REGIONS 128
> > static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
> > static int reserved_mem_count;
> >
> > --
> > 2.7.4
> >