xen-devel.lists.xenproject.org archive mirror
* [PATCH] arm/domain: allocate pages according to the order of struct domain size
@ 2016-06-08  6:54 Jiandi An
  2016-06-08  9:26 ` Julien Grall
  0 siblings, 1 reply; 3+ messages in thread
From: Jiandi An @ 2016-06-08  6:54 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, sstabellini, anjiandi, shankerd

As the number of CPUs supported on the system grows, the number of
GIC redistributors and MMIO handlers increases.  We need to increase
MAX_RDIST_COUNT and MAX_IO_HANDLER, which makes the size of struct
domain bigger than one page.

Remove the BUILD_BUG_ON check that the size of struct domain does not
exceed PAGE_SIZE, and instead allocate xenheap pages according to the
order of the size of struct domain.

Signed-off-by: Jiandi An <anjiandi@codeaurora.org>
---
 xen/arch/arm/domain.c      | 5 +++--
 xen/include/asm-arm/gic.h  | 2 +-
 xen/include/asm-arm/mmio.h | 2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 1365b4a..7f69236 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -438,8 +438,9 @@ void startup_cpu_idle_loop(void)
 struct domain *alloc_domain_struct(void)
 {
     struct domain *d;
-    BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE);
-    d = alloc_xenheap_pages(0, 0);
+    unsigned int order = get_order_from_bytes(sizeof(*d));
+
+    d = alloc_xenheap_pages(order, 0);
     if ( d == NULL )
         return NULL;
 
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index cd97bb2..8165de6 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -20,7 +20,7 @@
 
 #define NR_GIC_LOCAL_IRQS  NR_LOCAL_IRQS
 #define NR_GIC_SGI         16
-#define MAX_RDIST_COUNT    4
+#define MAX_RDIST_COUNT    64
 
 #define GICD_CTLR       (0x000)
 #define GICD_TYPER      (0x004)
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index da1cc2e..798d373 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -23,7 +23,7 @@
 #include <asm/processor.h>
 #include <asm/regs.h>
 
-#define MAX_IO_HANDLER  16
+#define MAX_IO_HANDLER  32
 
 typedef struct
 {
-- 
Jiandi An
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [PATCH] arm/domain: allocate pages according to the order of struct domain size
  2016-06-08  6:54 [PATCH] arm/domain: allocate pages according to the order of struct domain size Jiandi An
@ 2016-06-08  9:26 ` Julien Grall
  2016-06-09  0:09   ` Jiandi An
  0 siblings, 1 reply; 3+ messages in thread
From: Julien Grall @ 2016-06-08  9:26 UTC (permalink / raw)
  To: Jiandi An, xen-devel; +Cc: sstabellini, shankerd

Hello Jiandi,

On 08/06/2016 07:54, Jiandi An wrote:
> As the number of CPUs supported on the system grows, the number of
> GIC redistributors and MMIO handlers increases.  We need to increase
> MAX_RDIST_COUNT and MAX_IO_HANDLER, which makes the size of struct
> domain bigger than one page.

With this change, the memory footprint of a domain will increase by 4KB 
even if it does not use GICv3.

What is the size of the domain structure with your patch?

I would much prefer to allocate separate memory for the vGIC 
redistributors if it takes too much space.

>
> Remove the BUILD_BUG_ON check that the size of struct domain does not
> exceed PAGE_SIZE, and instead allocate xenheap pages according to the
> order of the size of struct domain.
>
> Signed-off-by: Jiandi An <anjiandi@codeaurora.org>
> ---
>  xen/arch/arm/domain.c      | 5 +++--
>  xen/include/asm-arm/gic.h  | 2 +-
>  xen/include/asm-arm/mmio.h | 2 +-
>  3 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 1365b4a..7f69236 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -438,8 +438,9 @@ void startup_cpu_idle_loop(void)
>  struct domain *alloc_domain_struct(void)
>  {
>      struct domain *d;
> -    BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE);
> -    d = alloc_xenheap_pages(0, 0);
> +    unsigned int order = get_order_from_bytes(sizeof(*d));
> +
> +    d = alloc_xenheap_pages(order, 0);
>      if ( d == NULL )
>          return NULL;
>
> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index cd97bb2..8165de6 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -20,7 +20,7 @@
>
>  #define NR_GIC_LOCAL_IRQS  NR_LOCAL_IRQS
>  #define NR_GIC_SGI         16
> -#define MAX_RDIST_COUNT    4
> +#define MAX_RDIST_COUNT    64

How many re-distributor regions does your platform have?

>
>  #define GICD_CTLR       (0x000)
>  #define GICD_TYPER      (0x004)
> diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
> index da1cc2e..798d373 100644
> --- a/xen/include/asm-arm/mmio.h
> +++ b/xen/include/asm-arm/mmio.h
> @@ -23,7 +23,7 @@
>  #include <asm/processor.h>
>  #include <asm/regs.h>
>
> -#define MAX_IO_HANDLER  16
> +#define MAX_IO_HANDLER  32

The vGICv3 driver is allocating one I/O handler per redistributor 
region. So if you bump MAX_RDIST_COUNT to 64, you at least need to bump 
MAX_IO_HANDLER to 80.

However, I am a bit concerned about the long-term performance impact, 
because the lookup is linear.

Regards,

-- 
Julien Grall


* Re: [PATCH] arm/domain: allocate pages according to the order of struct domain size
  2016-06-08  9:26 ` Julien Grall
@ 2016-06-09  0:09   ` Jiandi An
  0 siblings, 0 replies; 3+ messages in thread
From: Jiandi An @ 2016-06-09  0:09 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, shankerd, xen-devel

On 06/08/16 04:26, Julien Grall wrote:
> Hello Jiandi,
> 
> On 08/06/2016 07:54, Jiandi An wrote:
>> As the number of CPUs supported on the system grows, the number of
>> GIC redistributors and MMIO handlers increases.  We need to increase
>> MAX_RDIST_COUNT and MAX_IO_HANDLER, which makes the size of struct
>> domain bigger than one page.
> 
> With this change, the memory footprint of a domain will increase by 4KB even if it does not use GICv3.
> 
> What is the size of the domain structure with your patch?
> 
> I would much prefer to allocate separate memory for the vGIC redistributors if it takes too much space.
> 

Hi Julien, the intent of this patch is to prepare for an upcoming patch that supports
redistributor parsing from the MADT GICC subtables, which are tied to the number of CPUs
on the system.  The current MAX_RDIST_COUNT and MAX_IO_HANDLER are not enough, and raising
them pushes the size of struct domain beyond one page.

I agree the better way is to leave struct domain alone and allocate memory for the vGIC
redistributors separately and dynamically, based on the number of CPU interfaces found in
the GICC subtables.

Thanks for your suggestion.  We'll bundle the dynamic memory allocation for the
redistributors and I/O handlers into the upcoming patch that adds redistributor parsing
from the GICC subtables.

>>
>> Remove the BUILD_BUG_ON check that the size of struct domain does not
>> exceed PAGE_SIZE, and instead allocate xenheap pages according to the
>> order of the size of struct domain.
>>
>> Signed-off-by: Jiandi An <anjiandi@codeaurora.org>
>> ---
>>  xen/arch/arm/domain.c      | 5 +++--
>>  xen/include/asm-arm/gic.h  | 2 +-
>>  xen/include/asm-arm/mmio.h | 2 +-
>>  3 files changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 1365b4a..7f69236 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -438,8 +438,9 @@ void startup_cpu_idle_loop(void)
>>  struct domain *alloc_domain_struct(void)
>>  {
>>      struct domain *d;
>> -    BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE);
>> -    d = alloc_xenheap_pages(0, 0);
>> +    unsigned int order = get_order_from_bytes(sizeof(*d));
>> +
>> +    d = alloc_xenheap_pages(order, 0);
>>      if ( d == NULL )
>>          return NULL;
>>
>> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
>> index cd97bb2..8165de6 100644
>> --- a/xen/include/asm-arm/gic.h
>> +++ b/xen/include/asm-arm/gic.h
>> @@ -20,7 +20,7 @@
>>
>>  #define NR_GIC_LOCAL_IRQS  NR_LOCAL_IRQS
>>  #define NR_GIC_SGI         16
>> -#define MAX_RDIST_COUNT    4
>> +#define MAX_RDIST_COUNT    64
> 
> How many re-distributor regions does your platform have?

It's more than the current cap of 4.

> 
>>
>>  #define GICD_CTLR       (0x000)
>>  #define GICD_TYPER      (0x004)
>> diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
>> index da1cc2e..798d373 100644
>> --- a/xen/include/asm-arm/mmio.h
>> +++ b/xen/include/asm-arm/mmio.h
>> @@ -23,7 +23,7 @@
>>  #include <asm/processor.h>
>>  #include <asm/regs.h>
>>
>> -#define MAX_IO_HANDLER  16
>> +#define MAX_IO_HANDLER  32
> 
> The vGICv3 driver is allocating one I/O handler per redistributor region. So if you bump MAX_RDIST_COUNT to 64, you at least need to bump MAX_IO_HANDLER to 80.
> 
> However, I am a bit concerned about the long-term performance impact, because the lookup is linear.

Agreed that the I/O handlers also need to be sized and allocated dynamically.

> 
> Regards,
> 


-- 
Jiandi An
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project


Thread overview: 3+ messages
2016-06-08  6:54 [PATCH] arm/domain: allocate pages according to the order of struct domain size Jiandi An
2016-06-08  9:26 ` Julien Grall
2016-06-09  0:09   ` Jiandi An
