* [RFC PATCH V3 0/3] Extend resources to support more vcpus in single VM.
@ 2017-09-13  4:52 Lan Tianyu
  2017-09-13  4:52 ` [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support Lan Tianyu
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Lan Tianyu @ 2017-09-13  4:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	roger.pau, chao.gao

Changes since v2:
       1) Increase the page pool size when setting the max vcpu count.
       2) Size the MADT table according to the APIC ID of each vcpu.
       3) Fix some coding style issues.

Changes since v1:
       1) Increase the hap page pool according to the vcpu number.
       2) Use the "Processor" syntax to define vcpus with APIC ID < 255
and the "Device" syntax for the remaining vcpus in the ACPI DSDT table.
       3) Use the xAPIC structure for vcpus with APIC ID < 255
and the x2APIC structure for the remaining vcpus in the ACPI MADT table.

This patchset extends some resources (e.g. event channels, the hap/shadow
page pool and so on) to support more vcpus in a single VM.


Lan Tianyu (3):
  Xen: Increase hap/shadow page pool size to support more vcpus support
  Tool/ACPI: DSDT extension to support more vcpus
  hvmload: Add x2apic entry support in the MADT build

 tools/libacpi/acpi2_0.h  | 10 +++++++++
 tools/libacpi/build.c    | 56 +++++++++++++++++++++++++++++++++++-------------
 tools/libacpi/mk_dsdt.c  | 31 +++++++++++++++++++++------
 xen/arch/arm/domain.c    |  5 +++++
 xen/arch/x86/domain.c    | 25 +++++++++++++++++++++
 xen/common/domctl.c      |  3 +++
 xen/include/xen/domain.h |  2 ++
 7 files changed, 111 insertions(+), 21 deletions(-)

-- 
1.8.3.1



* [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support
  2017-09-13  4:52 [RFC PATCH V3 0/3] Extend resources to support more vcpus in single VM Lan Tianyu
@ 2017-09-13  4:52 ` Lan Tianyu
  2017-09-18 13:06   ` Wei Liu
  2017-09-13  4:52 ` [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus Lan Tianyu
  2017-09-13  4:52 ` [RFC PATCH V3 3/3] hvmload: Add x2apic entry support in the MADT build Lan Tianyu
  2 siblings, 1 reply; 19+ messages in thread
From: Lan Tianyu @ 2017-09-13  4:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, tim, julien.grall, jbeulich,
	roger.pau, chao.gao

This patch is to increase page pool size when max vcpu number is larger
than 128.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/arm/domain.c    |  5 +++++
 xen/arch/x86/domain.c    | 25 +++++++++++++++++++++++++
 xen/common/domctl.c      |  3 +++
 xen/include/xen/domain.h |  2 ++
 4 files changed, 35 insertions(+)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 6512f01..94cf70b 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -824,6 +824,11 @@ int arch_vcpu_reset(struct vcpu *v)
     return 0;
 }
 
+int arch_domain_set_max_vcpus(struct domain *d)
+{
+    return 0;
+}
+
 static int relinquish_memory(struct domain *d, struct page_list_head *list)
 {
     struct page_info *page, *tmp;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index dbddc53..0e230f9 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1161,6 +1161,31 @@ int arch_vcpu_reset(struct vcpu *v)
     return 0;
 }
 
+int arch_domain_set_max_vcpus(struct domain *d)
+{
+    int ret;
+
+    /* Increase page pool in order to support more vcpus. */
+    if ( d->max_vcpus > 128 )
+    {
+        unsigned long nr_pages;
+
+        if (hap_enabled(d))
+            nr_pages = 1024;
+        else
+            nr_pages = 4096;
+
+        ret = paging_set_allocation(d, nr_pages, NULL);
+        if ( ret != 0 )
+        {
+            paging_set_allocation(d, 0, NULL);
+            return ret;
+        }
+    }
+
+    return 0;
+}
+
 long
 arch_do_vcpu_op(
     int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 42658e5..64357a3 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -631,6 +631,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             d->max_vcpus = max;
         }
 
+        if ( arch_domain_set_max_vcpus(d) < 0)
+            goto maxvcpu_out;
+
         for ( i = 0; i < max; i++ )
         {
             if ( d->vcpu[i] != NULL )
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 347f264..e1ece3a 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -81,6 +81,8 @@ void arch_dump_domain_info(struct domain *d);
 
 int arch_vcpu_reset(struct vcpu *);
 
+int arch_domain_set_max_vcpus(struct domain *d);
+
 extern spinlock_t vcpu_alloc_lock;
 bool_t domctl_lock_acquire(void);
 void domctl_lock_release(void);
-- 
1.8.3.1



* [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus
  2017-09-13  4:52 [RFC PATCH V3 0/3] Extend resources to support more vcpus in single VM Lan Tianyu
  2017-09-13  4:52 ` [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support Lan Tianyu
@ 2017-09-13  4:52 ` Lan Tianyu
  2017-09-19 13:29   ` Roger Pau Monné
  2017-09-13  4:52 ` [RFC PATCH V3 3/3] hvmload: Add x2apic entry support in the MADT build Lan Tianyu
  2 siblings, 1 reply; 19+ messages in thread
From: Lan Tianyu @ 2017-09-13  4:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, ian.jackson, jbeulich,
	roger.pau, chao.gao

This patch changes the DSDT processor objects to support >128 vcpus,
according to ACPI spec 8.4, "Declaring Processors".

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 tools/libacpi/mk_dsdt.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
index 2daf32c..09c1529 100644
--- a/tools/libacpi/mk_dsdt.c
+++ b/tools/libacpi/mk_dsdt.c
@@ -24,6 +24,8 @@
 #include <xen/arch-arm.h>
 #endif
 
+#define CPU_NAME_FMT      "P%.03X"
+
 static unsigned int indent_level;
 static bool debug = false;
 
@@ -196,10 +198,27 @@ int main(int argc, char **argv)
     /* Define processor objects and control methods. */
     for ( cpu = 0; cpu < max_cpus; cpu++)
     {
-        push_block("Processor", "PR%02X, %d, 0x0000b010, 0x06", cpu, cpu);
 
-        stmt("Name", "_HID, \"ACPI0007\"");
+#ifdef CONFIG_X86
+        unsigned int apic_id = cpu * 2;
+
+        if ( apic_id > 254 )
+        {
+            push_block("Device", CPU_NAME_FMT, cpu);
+        }
+        else
+#endif
+        {
+            if (cpu > 255)
+            {
+                fprintf(stderr, "Exceed the range of processor ID \n");
+                return -1;
+            }
+            push_block("Processor", CPU_NAME_FMT ", %d,0x0000b010, 0x06",
+                       cpu, cpu);
+        }
 
+        stmt("Name", "_HID, \"ACPI0007\"");
         stmt("Name", "_UID, %d", cpu);
 #ifdef CONFIG_ARM_64
         pop_block();
@@ -268,15 +287,15 @@ int main(int argc, char **argv)
         /* Extract current CPU's status: 0=offline; 1=online. */
         stmt("And", "Local1, 1, Local2");
         /* Check if status is up-to-date in the relevant MADT LAPIC entry... */
-        push_block("If", "LNotEqual(Local2, \\_SB.PR%02X.FLG)", cpu);
+        push_block("If", "LNotEqual(Local2, \\_SB." CPU_NAME_FMT ".FLG)", cpu);
         /* ...If not, update it and the MADT checksum, and notify OSPM. */
-        stmt("Store", "Local2, \\_SB.PR%02X.FLG", cpu);
+        stmt("Store", "Local2, \\_SB." CPU_NAME_FMT ".FLG", cpu);
         push_block("If", "LEqual(Local2, 1)");
-        stmt("Notify", "PR%02X, 1", cpu); /* Notify: Device Check */
+        stmt("Notify", CPU_NAME_FMT ", 1", cpu); /* Notify: Device Check */
         stmt("Subtract", "\\_SB.MSU, 1, \\_SB.MSU"); /* Adjust MADT csum */
         pop_block();
         push_block("Else", NULL);
-        stmt("Notify", "PR%02X, 3", cpu); /* Notify: Eject Request */
+        stmt("Notify", CPU_NAME_FMT ", 3", cpu); /* Notify: Eject Request */
         stmt("Add", "\\_SB.MSU, 1, \\_SB.MSU"); /* Adjust MADT csum */
         pop_block();
         pop_block();
-- 
1.8.3.1



* [RFC PATCH V3 3/3] hvmload: Add x2apic entry support in the MADT build
  2017-09-13  4:52 [RFC PATCH V3 0/3] Extend resources to support more vcpus in single VM Lan Tianyu
  2017-09-13  4:52 ` [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support Lan Tianyu
  2017-09-13  4:52 ` [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus Lan Tianyu
@ 2017-09-13  4:52 ` Lan Tianyu
  2017-09-19 13:41   ` Roger Pau Monné
  2 siblings, 1 reply; 19+ messages in thread
From: Lan Tianyu @ 2017-09-13  4:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, ian.jackson, jbeulich,
	roger.pau, chao.gao

This patch adds x2APIC entry support to the ACPI MADT table,
according to ACPI spec 5.2.12.12, "Processor Local x2APIC Structure".

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 tools/libacpi/acpi2_0.h | 10 +++++++++
 tools/libacpi/build.c   | 56 ++++++++++++++++++++++++++++++++++++-------------
 2 files changed, 51 insertions(+), 15 deletions(-)

diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
index 2619ba3..ada5131 100644
--- a/tools/libacpi/acpi2_0.h
+++ b/tools/libacpi/acpi2_0.h
@@ -322,6 +322,7 @@ struct acpi_20_waet {
 #define ACPI_IO_SAPIC                       0x06
 #define ACPI_PROCESSOR_LOCAL_SAPIC          0x07
 #define ACPI_PLATFORM_INTERRUPT_SOURCES     0x08
+#define ACPI_PROCESSOR_LOCAL_X2APIC         0x09
 
 /*
  * APIC Structure Definitions.
@@ -338,6 +339,15 @@ struct acpi_20_madt_lapic {
     uint32_t flags;
 };
 
+struct acpi_20_madt_x2apic {
+    uint8_t  type;
+    uint8_t  length;
+    uint16_t reserved;          /* reserved - must be zero */
+    uint32_t apic_id;           /* Processor x2APIC ID  */
+    uint32_t flags;
+    uint32_t acpi_processor_id; /* ACPI processor UID */
+};
+
 /*
  * Local APIC Flags.  All other bits are reserved and must be 0.
  */
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index f9881c9..4830339 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -78,9 +78,9 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
     struct acpi_20_madt           *madt;
     struct acpi_20_madt_intsrcovr *intsrcovr;
     struct acpi_20_madt_ioapic    *io_apic;
-    struct acpi_20_madt_lapic     *lapic;
     const struct hvm_info_table   *hvminfo = config->hvminfo;
     int i, sz;
+    void *end;
 
     if ( config->lapic_id == NULL )
         return NULL;
@@ -88,7 +88,14 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
     sz  = sizeof(struct acpi_20_madt);
     sz += sizeof(struct acpi_20_madt_intsrcovr) * 16;
     sz += sizeof(struct acpi_20_madt_ioapic);
-    sz += sizeof(struct acpi_20_madt_lapic) * hvminfo->nr_vcpus;
+
+    for ( i = 0; i < hvminfo->nr_vcpus; i++ )
+    {
+        if ( config->lapic_id(i) > 254)
+            sz += sizeof(struct acpi_20_madt_x2apic);
+        else
+            sz += sizeof(struct acpi_20_madt_lapic);
+    }
 
     madt = ctxt->mem_ops.alloc(ctxt, sz, 16);
     if (!madt) return NULL;
@@ -142,27 +149,46 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
         io_apic->ioapic_id   = config->ioapic_id;
         io_apic->ioapic_addr = config->ioapic_base_address;
 
-        lapic = (struct acpi_20_madt_lapic *)(io_apic + 1);
+        end = (struct acpi_20_madt_lapic *)(io_apic + 1);
     }
     else
-        lapic = (struct acpi_20_madt_lapic *)(madt + 1);
+        end = (struct acpi_20_madt_lapic *)(madt + 1);
 
     info->nr_cpus = hvminfo->nr_vcpus;
-    info->madt_lapic0_addr = ctxt->mem_ops.v2p(ctxt, lapic);
+    info->madt_lapic0_addr = ctxt->mem_ops.v2p(ctxt, end);
+
     for ( i = 0; i < hvminfo->nr_vcpus; i++ )
     {
-        memset(lapic, 0, sizeof(*lapic));
-        lapic->type    = ACPI_PROCESSOR_LOCAL_APIC;
-        lapic->length  = sizeof(*lapic);
-        /* Processor ID must match processor-object IDs in the DSDT. */
-        lapic->acpi_processor_id = i;
-        lapic->apic_id = config->lapic_id(i);
-        lapic->flags = (test_bit(i, hvminfo->vcpu_online)
-                        ? ACPI_LOCAL_APIC_ENABLED : 0);
-        lapic++;
+        unsigned int apic_id = config->lapic_id(i);
+
+        if ( apic_id < 255 ) {
+            struct acpi_20_madt_lapic *lapic = end;
+
+            memset(lapic, 0, sizeof(*lapic));
+            lapic->type    = ACPI_PROCESSOR_LOCAL_APIC;
+            lapic->length  = sizeof(*lapic);
+            /* Processor ID must match processor-object IDs in the DSDT. */
+            lapic->acpi_processor_id = i;
+            lapic->apic_id = apic_id;
+            lapic->flags = test_bit(i, hvminfo->vcpu_online)
+                            ? ACPI_LOCAL_APIC_ENABLED : 0;
+            end = ++lapic;
+        } else {
+            struct acpi_20_madt_x2apic *lapic = end;
+
+            memset(lapic, 0, sizeof(*lapic));
+            lapic->type    = ACPI_PROCESSOR_LOCAL_X2APIC;
+            lapic->length  = sizeof(*lapic);
+            /* Processor ID must match processor-object IDs in the DSDT. */
+            lapic->acpi_processor_id = i;
+            lapic->apic_id = apic_id;
+            lapic->flags =  test_bit(i, hvminfo->vcpu_online)
+                            ? ACPI_LOCAL_APIC_ENABLED : 0;
+            end = ++lapic;
+        }
     }
 
-    madt->header.length = (unsigned char *)lapic - (unsigned char *)madt;
+    madt->header.length = (unsigned char *)end - (unsigned char *)madt;
     set_checksum(madt, offsetof(struct acpi_header, checksum),
                  madt->header.length);
     info->madt_csum_addr =
-- 
1.8.3.1



* Re: [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support
  2017-09-13  4:52 ` [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support Lan Tianyu
@ 2017-09-18 13:06   ` Wei Liu
  2017-09-19  3:06     ` Lan Tianyu
  0 siblings, 1 reply; 19+ messages in thread
From: Wei Liu @ 2017-09-18 13:06 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: tim, kevin.tian, sstabellini, wei.liu2, George.Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, julien.grall, jbeulich,
	roger.pau, chao.gao

On Wed, Sep 13, 2017 at 12:52:47AM -0400, Lan Tianyu wrote:
> This patch is to increase page pool size when max vcpu number is larger
> than 128.
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  xen/arch/arm/domain.c    |  5 +++++
>  xen/arch/x86/domain.c    | 25 +++++++++++++++++++++++++
>  xen/common/domctl.c      |  3 +++
>  xen/include/xen/domain.h |  2 ++
>  4 files changed, 35 insertions(+)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 6512f01..94cf70b 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -824,6 +824,11 @@ int arch_vcpu_reset(struct vcpu *v)
>      return 0;
>  }
>  
> +int arch_domain_set_max_vcpus(struct domain *d)
> +{
> +    return 0;
> +}
> +
>  static int relinquish_memory(struct domain *d, struct page_list_head *list)
>  {
>      struct page_info *page, *tmp;
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index dbddc53..0e230f9 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1161,6 +1161,31 @@ int arch_vcpu_reset(struct vcpu *v)
>      return 0;
>  }
>  
> +int arch_domain_set_max_vcpus(struct domain *d)

The name doesn't match what the function does.

> +{
> +    int ret;
> +
> +    /* Increase page pool in order to support more vcpus. */
> +    if ( d->max_vcpus > 128 )
> +    {
> +        unsigned long nr_pages;
> +
> +        if (hap_enabled(d))

Coding style.

> +            nr_pages = 1024;
> +        else
> +            nr_pages = 4096;
> +
> +        ret = paging_set_allocation(d, nr_pages, NULL);

Does this work on PV guests?

> +        if ( ret != 0 )
> +        {
> +            paging_set_allocation(d, 0, NULL);
> +            return ret;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
>  long
>  arch_do_vcpu_op(
>      int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 42658e5..64357a3 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -631,6 +631,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>              d->max_vcpus = max;
>          }
>  
> +        if ( arch_domain_set_max_vcpus(d) < 0)

!= 0 please.

> +            goto maxvcpu_out;
> +
>          for ( i = 0; i < max; i++ )
>          {
>              if ( d->vcpu[i] != NULL )
> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
> index 347f264..e1ece3a 100644
> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -81,6 +81,8 @@ void arch_dump_domain_info(struct domain *d);
>  
>  int arch_vcpu_reset(struct vcpu *);
>  
> +int arch_domain_set_max_vcpus(struct domain *d);
> +
>  extern spinlock_t vcpu_alloc_lock;
>  bool_t domctl_lock_acquire(void);
>  void domctl_lock_release(void);
> -- 
> 1.8.3.1
> 


* Re: [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support
  2017-09-18 13:06   ` Wei Liu
@ 2017-09-19  3:06     ` Lan Tianyu
  2017-09-20 15:13       ` Wei Liu
  0 siblings, 1 reply; 19+ messages in thread
From: Lan Tianyu @ 2017-09-19  3:06 UTC (permalink / raw)
  To: Wei Liu
  Cc: tim, kevin.tian, sstabellini, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, jbeulich, roger.pau,
	chao.gao

Hi Wei:

On 2017-09-18 21:06, Wei Liu wrote:
> On Wed, Sep 13, 2017 at 12:52:47AM -0400, Lan Tianyu wrote:
>> This patch is to increase page pool size when max vcpu number is larger
>> than 128.
>>
>> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
>> ---
>>  xen/arch/arm/domain.c    |  5 +++++
>>  xen/arch/x86/domain.c    | 25 +++++++++++++++++++++++++
>>  xen/common/domctl.c      |  3 +++
>>  xen/include/xen/domain.h |  2 ++
>>  4 files changed, 35 insertions(+)
>>
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 6512f01..94cf70b 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -824,6 +824,11 @@ int arch_vcpu_reset(struct vcpu *v)
>>      return 0;
>>  }
>>  
>> +int arch_domain_set_max_vcpus(struct domain *d)
>> +{
>> +    return 0;
>> +}
>> +
>>  static int relinquish_memory(struct domain *d, struct page_list_head *list)
>>  {
>>      struct page_info *page, *tmp;
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index dbddc53..0e230f9 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1161,6 +1161,31 @@ int arch_vcpu_reset(struct vcpu *v)
>>      return 0;
>>  }
>>  
>> +int arch_domain_set_max_vcpus(struct domain *d)
> 
> The name doesn't match what the function does.
> 

I originally hoped to introduce a hook for each arch when setting max vcpus.
Each arch function can then do its own customized work, hence the name
"arch_domain_set_max_vcpus".

How about "arch_domain_setup_vcpus_resource"?


>> +{
>> +    int ret;
>> +
>> +    /* Increase page pool in order to support more vcpus. */
>> +    if ( d->max_vcpus > 128 )
>> +    {
>> +        unsigned long nr_pages;
>> +
>> +        if (hap_enabled(d))
> 
> Coding style.

Will update. Thanks.

> 
>> +            nr_pages = 1024;
>> +        else
>> +            nr_pages = 4096;
>> +
>> +        ret = paging_set_allocation(d, nr_pages, NULL);
> 
> Does this work on PV guests?


Sorry. This code should not run for PV guests. Will add a domain type
check here.
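
(For illustration only: a sketch of the guarded version, assuming an
is_hvm_domain() check is the intended domain type check and keeping the
sizes from the patch above.)

    int arch_domain_set_max_vcpus(struct domain *d)
    {
        /* Sketch only: skip the page pool bump for PV guests. */
        if ( is_hvm_domain(d) && d->max_vcpus > 128 )
        {
            unsigned long nr_pages = hap_enabled(d) ? 1024 : 4096;
            int ret = paging_set_allocation(d, nr_pages, NULL);

            if ( ret )
            {
                paging_set_allocation(d, 0, NULL);
                return ret;
            }
        }

        return 0;
    }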

> 
>> +        if ( ret != 0 )
>> +        {
>> +            paging_set_allocation(d, 0, NULL);
>> +            return ret;
>> +        }
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>>  long
>>  arch_do_vcpu_op(
>>      int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
>> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
>> index 42658e5..64357a3 100644
>> --- a/xen/common/domctl.c
>> +++ b/xen/common/domctl.c
>> @@ -631,6 +631,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>              d->max_vcpus = max;
>>          }
>>  
>> +        if ( arch_domain_set_max_vcpus(d) < 0)
> 
> != 0 please.
> 

Sure.

>> +            goto maxvcpu_out;
>> +
>>          for ( i = 0; i < max; i++ )
>>          {
>>              if ( d->vcpu[i] != NULL )
>> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
>> index 347f264..e1ece3a 100644
>> --- a/xen/include/xen/domain.h
>> +++ b/xen/include/xen/domain.h
>> @@ -81,6 +81,8 @@ void arch_dump_domain_info(struct domain *d);
>>  
>>  int arch_vcpu_reset(struct vcpu *);
>>  
>> +int arch_domain_set_max_vcpus(struct domain *d);
>> +
>>  extern spinlock_t vcpu_alloc_lock;
>>  bool_t domctl_lock_acquire(void);
>>  void domctl_lock_release(void);
>> -- 
>> 1.8.3.1
>>


-- 
Best regards
Tianyu Lan


* Re: [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus
  2017-09-13  4:52 ` [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus Lan Tianyu
@ 2017-09-19 13:29   ` Roger Pau Monné
  2017-09-19 13:44     ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Roger Pau Monné @ 2017-09-19 13:29 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, ian.jackson, xen-devel, Julien Grall,
	jbeulich, chao.gao

On Wed, Sep 13, 2017 at 12:52:48AM -0400, Lan Tianyu wrote:
> This patch changes the DSDT processor objects to support >128 vcpus,
> according to ACPI spec 8.4, "Declaring Processors".
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  tools/libacpi/mk_dsdt.c | 31 +++++++++++++++++++++++++------
>  1 file changed, 25 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
> index 2daf32c..09c1529 100644
> --- a/tools/libacpi/mk_dsdt.c
> +++ b/tools/libacpi/mk_dsdt.c
> @@ -24,6 +24,8 @@
>  #include <xen/arch-arm.h>
>  #endif
>  
> +#define CPU_NAME_FMT      "P%.03X"
> +
>  static unsigned int indent_level;
>  static bool debug = false;
>  
> @@ -196,10 +198,27 @@ int main(int argc, char **argv)
>      /* Define processor objects and control methods. */
>      for ( cpu = 0; cpu < max_cpus; cpu++)
>      {
> -        push_block("Processor", "PR%02X, %d, 0x0000b010, 0x06", cpu, cpu);
>  
> -        stmt("Name", "_HID, \"ACPI0007\"");
> +#ifdef CONFIG_X86
> +        unsigned int apic_id = cpu * 2;

As said earlier, I don't like having so many places where the apic id is
calculated. Please look into unifying those.

Also, declaring a new variable here is wrong.
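
(Purely as an illustration: a single, hypothetically named helper shared
by mk_dsdt.c and the MADT builder could keep the mapping in one place,
assuming the current APIC ID == 2 * vcpu scheme is kept.)

    /* Hypothetical shared helper; name and placement are placeholders. */
    static inline unsigned int hvm_vcpu_to_apic_id(unsigned int vcpu)
    {
        /* Assumption: vcpus keep being assigned APIC IDs of 2 * vcpu. */
        return 2 * vcpu;
    }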

> +
> +        if ( apic_id > 254 )

255? An APIC ID of 255 should still be fine.

> +        {
> +            push_block("Device", CPU_NAME_FMT, cpu);
> +        }
> +        else
> +#endif
> +        {
> +            if (cpu > 255)
> +            {
> +                fprintf(stderr, "Exceed the range of processor ID \n");
> +                return -1;
> +            }

I'm not sure whether ARM shouldn't just use Device processor objects
directly. x86 has to use Processor for compatibility reasons,
but I guess that's not an issue for ARM.

Thanks, Roger.


* Re: [RFC PATCH V3 3/3] hvmload: Add x2apic entry support in the MADT build
  2017-09-13  4:52 ` [RFC PATCH V3 3/3] hvmload: Add x2apic entry support in the MADT build Lan Tianyu
@ 2017-09-19 13:41   ` Roger Pau Monné
  2017-09-19 13:50     ` Roger Pau Monné
  0 siblings, 1 reply; 19+ messages in thread
From: Roger Pau Monné @ 2017-09-19 13:41 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, ian.jackson, xen-devel, jbeulich, chao.gao

On Wed, Sep 13, 2017 at 12:52:49AM -0400, Lan Tianyu wrote:
> @@ -88,7 +88,14 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
>      sz  = sizeof(struct acpi_20_madt);
>      sz += sizeof(struct acpi_20_madt_intsrcovr) * 16;
>      sz += sizeof(struct acpi_20_madt_ioapic);
> -    sz += sizeof(struct acpi_20_madt_lapic) * hvminfo->nr_vcpus;
> +
> +    for ( i = 0; i < hvminfo->nr_vcpus; i++ )
> +    {
> +        if ( config->lapic_id(i) > 254)

I guess you already know I'm going to complain that the way to get the
apic id is different here than in the previous patch.

> +            sz += sizeof(struct acpi_20_madt_x2apic);
> +        else
> +            sz += sizeof(struct acpi_20_madt_lapic);
> +    }
>  
>      madt = ctxt->mem_ops.alloc(ctxt, sz, 16);
>      if (!madt) return NULL;
> @@ -142,27 +149,46 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
>          io_apic->ioapic_id   = config->ioapic_id;
>          io_apic->ioapic_addr = config->ioapic_base_address;
>  
> -        lapic = (struct acpi_20_madt_lapic *)(io_apic + 1);
> +        end = (struct acpi_20_madt_lapic *)(io_apic + 1);
>      }
>      else
> -        lapic = (struct acpi_20_madt_lapic *)(madt + 1);
> +        end = (struct acpi_20_madt_lapic *)(madt + 1);
>  
>      info->nr_cpus = hvminfo->nr_vcpus;
> -    info->madt_lapic0_addr = ctxt->mem_ops.v2p(ctxt, lapic);
> +    info->madt_lapic0_addr = ctxt->mem_ops.v2p(ctxt, end);
> +
>      for ( i = 0; i < hvminfo->nr_vcpus; i++ )
>      {
> -        memset(lapic, 0, sizeof(*lapic));
> -        lapic->type    = ACPI_PROCESSOR_LOCAL_APIC;
> -        lapic->length  = sizeof(*lapic);
> -        /* Processor ID must match processor-object IDs in the DSDT. */
> -        lapic->acpi_processor_id = i;
> -        lapic->apic_id = config->lapic_id(i);
> -        lapic->flags = (test_bit(i, hvminfo->vcpu_online)
> -                        ? ACPI_LOCAL_APIC_ENABLED : 0);
> -        lapic++;
> +        unsigned int apic_id = config->lapic_id(i);
> +
> +        if ( apic_id < 255 ) {
> +            struct acpi_20_madt_lapic *lapic = end;
> +
> +            memset(lapic, 0, sizeof(*lapic));
> +            lapic->type    = ACPI_PROCESSOR_LOCAL_APIC;
> +            lapic->length  = sizeof(*lapic);
> +            /* Processor ID must match processor-object IDs in the DSDT. */
> +            lapic->acpi_processor_id = i;
> +            lapic->apic_id = apic_id;
> +            lapic->flags = test_bit(i, hvminfo->vcpu_online)
> +                            ? ACPI_LOCAL_APIC_ENABLED : 0;
> +            end = ++lapic;
> +        } else {
> +            struct acpi_20_madt_x2apic *lapic = end;
                                           ^x2apic to avoid confusion?

Thanks, Roger.


* Re: [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus
  2017-09-19 13:29   ` Roger Pau Monné
@ 2017-09-19 13:44     ` Jan Beulich
  2017-09-19 13:48       ` Roger Pau Monné
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2017-09-19 13:44 UTC (permalink / raw)
  To: Roger Pau Monné, Lan Tianyu
  Cc: kevin.tian, wei.liu2, ian.jackson, xen-devel, Julien Grall, chao.gao

>>> On 19.09.17 at 15:29, <roger.pau@citrix.com> wrote:
> On Wed, Sep 13, 2017 at 12:52:48AM -0400, Lan Tianyu wrote:
>> +        if ( apic_id > 254 )
> 
> 255? An APIC ID of 255 should still be fine.

Wasn't it you who (validly) asked for the boundary to be 254, due
to 0xff being the broadcast value?

Jan



* Re: [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus
  2017-09-19 13:44     ` Jan Beulich
@ 2017-09-19 13:48       ` Roger Pau Monné
  2017-09-19 13:55         ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Roger Pau Monné @ 2017-09-19 13:48 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Lan Tianyu, kevin.tian, wei.liu2, ian.jackson, xen-devel,
	Julien Grall, chao.gao

On Tue, Sep 19, 2017 at 07:44:21AM -0600, Jan Beulich wrote:
> >>> On 19.09.17 at 15:29, <roger.pau@citrix.com> wrote:
> > On Wed, Sep 13, 2017 at 12:52:48AM -0400, Lan Tianyu wrote:
> >> +        if ( apic_id > 254 )
> > 
> > 255? An APIC ID of 255 should still be fine.
> 
> Wasn't it you who (validly) asked for the boundary to be 254, due
> to 0xff being the broadcast value?

But that's the ACPI ID, not the APIC ID.

Roger.


* Re: [RFC PATCH V3 3/3] hvmload: Add x2apic entry support in the MADT build
  2017-09-19 13:41   ` Roger Pau Monné
@ 2017-09-19 13:50     ` Roger Pau Monné
  0 siblings, 0 replies; 19+ messages in thread
From: Roger Pau Monné @ 2017-09-19 13:50 UTC (permalink / raw)
  To: Lan Tianyu, kevin.tian, wei.liu2, ian.jackson, xen-devel,
	jbeulich, chao.gao

Forgot something in the previous reply.

On Tue, Sep 19, 2017 at 02:41:39PM +0100, Roger Pau Monné wrote:
> On Wed, Sep 13, 2017 at 12:52:49AM -0400, Lan Tianyu wrote:
> > @@ -88,7 +88,14 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
> >      for ( i = 0; i < hvminfo->nr_vcpus; i++ )
> >      {
> > -        memset(lapic, 0, sizeof(*lapic));
> > -        lapic->type    = ACPI_PROCESSOR_LOCAL_APIC;
> > -        lapic->length  = sizeof(*lapic);
> > -        /* Processor ID must match processor-object IDs in the DSDT. */
> > -        lapic->acpi_processor_id = i;
> > -        lapic->apic_id = config->lapic_id(i);
> > -        lapic->flags = (test_bit(i, hvminfo->vcpu_online)
> > -                        ? ACPI_LOCAL_APIC_ENABLED : 0);
> > -        lapic++;
> > +        unsigned int apic_id = config->lapic_id(i);
> > +
> > +        if ( apic_id < 255 ) {
> > +            struct acpi_20_madt_lapic *lapic = end;
> > +
> > +            memset(lapic, 0, sizeof(*lapic));
> > +            lapic->type    = ACPI_PROCESSOR_LOCAL_APIC;
> > +            lapic->length  = sizeof(*lapic);
> > +            /* Processor ID must match processor-object IDs in the DSDT. */
> > +            lapic->acpi_processor_id = i;

An assert(lapic->acpi_processor_id < 255) would be nice to have here.

Thanks, Roger.


* Re: [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus
  2017-09-19 13:48       ` Roger Pau Monné
@ 2017-09-19 13:55         ` Jan Beulich
  2017-09-19 14:13           ` Roger Pau Monné
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2017-09-19 13:55 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Lan Tianyu, kevin.tian, wei.liu2, ian.jackson, xen-devel,
	Julien Grall, chao.gao

>>> On 19.09.17 at 15:48, <roger.pau@citrix.com> wrote:
> On Tue, Sep 19, 2017 at 07:44:21AM -0600, Jan Beulich wrote:
>> >>> On 19.09.17 at 15:29, <roger.pau@citrix.com> wrote:
>> > On Wed, Sep 13, 2017 at 12:52:48AM -0400, Lan Tianyu wrote:
>> >> +        if ( apic_id > 254 )
>> > 
>> > 255? An APIC ID of 255 should still be fine.
>> 
>> Wasn't it you who (validly) asked for the boundary to be 254, due
>> to 0xff being the broadcast value?
> 
> But that's the ACPI ID, not the APIC ID.

The code above says "apic_id" - is the variable mis-named? Or am
I reading your reply the wrong way round, in which case the question
would be why an ACPI ID could ever express something like
"broadcast"?

Jan



* Re: [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus
  2017-09-19 13:55         ` Jan Beulich
@ 2017-09-19 14:13           ` Roger Pau Monné
  2017-09-19 15:02             ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Roger Pau Monné @ 2017-09-19 14:13 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Lan Tianyu, kevin.tian, wei.liu2, ian.jackson, xen-devel,
	Julien Grall, chao.gao

On Tue, Sep 19, 2017 at 07:55:32AM -0600, Jan Beulich wrote:
> >>> On 19.09.17 at 15:48, <roger.pau@citrix.com> wrote:
> > On Tue, Sep 19, 2017 at 07:44:21AM -0600, Jan Beulich wrote:
> >> >>> On 19.09.17 at 15:29, <roger.pau@citrix.com> wrote:
> >> > On Wed, Sep 13, 2017 at 12:52:48AM -0400, Lan Tianyu wrote:
> >> >> +        if ( apic_id > 254 )
> >> > 
> >> > 255? An APIC ID of 255 should still be fine.
> >> 
> >> Wasn't it you who (validly) asked for the boundary to be 254, due
> >> to 0xff being the broadcast value?
> > 
> > But that's the ACPI ID, not the APIC ID.
> 
> The code above says "apic_id" - is the variable mis-named? Or am
> I reading your reply the wrong way round, in which case the question
> would be why an ACPI ID could ever express something like
> "broadcast"?

Yes, sorry, I got mixed up. This is indeed fine, as a local APIC ID
of 255 is the broadcast ID. But the same also applies to the ACPI ID,
since an ACPI ID of 255 is also the broadcast ID for local APIC
entries in the MADT. For example, a Local APIC NMI Structure with an
ACPI ID of 255 applies to all local APICs.

We need to be careful to not create local APIC entries with either
APIC or ACPI ID equal to 255 (and to also not create Processor objects
with ACPI ID of 255).
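
(A small sketch of that rule with a hypothetical helper name, just to
make the constraint explicit; 0xff is the broadcast value for the
one-byte xAPIC-format fields.)

    /* Hypothetical predicate: use an x2APIC-format MADT entry (and a
     * Device object in the DSDT) whenever either ID cannot safely be
     * encoded in the one-byte xAPIC-format fields. */
    static bool must_use_x2apic(unsigned int acpi_uid, unsigned int apic_id)
    {
        return acpi_uid >= 255 || apic_id >= 255;
    }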

Roger.


* Re: [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus
  2017-09-19 14:13           ` Roger Pau Monné
@ 2017-09-19 15:02             ` Jan Beulich
  2017-09-19 15:35               ` Roger Pau Monné
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2017-09-19 15:02 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Lan Tianyu, kevin.tian, wei.liu2, ian.jackson, xen-devel,
	Julien Grall, chao.gao

>>> On 19.09.17 at 16:13, <roger.pau@citrix.com> wrote:
> On Tue, Sep 19, 2017 at 07:55:32AM -0600, Jan Beulich wrote:
>> >>> On 19.09.17 at 15:48, <roger.pau@citrix.com> wrote:
>> > On Tue, Sep 19, 2017 at 07:44:21AM -0600, Jan Beulich wrote:
>> >> >>> On 19.09.17 at 15:29, <roger.pau@citrix.com> wrote:
>> >> > On Wed, Sep 13, 2017 at 12:52:48AM -0400, Lan Tianyu wrote:
>> >> >> +        if ( apic_id > 254 )
>> >> > 
>> >> > 255? An APIC ID of 255 should still be fine.
>> >> 
>> >> Wasn't it you who (validly) asked for the boundary to be 254, due
>> >> to 0xff being the broadcast value?
>> > 
>> > But that's the ACPI ID, not the APIC ID.
>> 
>> The code above says "apic_id" - is the variable mis-named? Or am
>> I reading your reply the wrong way round, in which case the question
>> would be why an ACPI ID could ever express something like
>> "broadcast"?
> 
> Yes, sorry I got messed up. This is indeed fine, as a local APIC ID
> of 255 is the broadcast ID. But this also applies to the ACPI ID,
> since an ACPI ID of 255 is also the broadcast ID for local APIC
> entries in the MADT. For example a Local APIC NMI Structure with an
> ACPI ID of 255 applies to all local APICs.

Indeed.

> We need to be careful to not create local APIC entries with either
> APIC or ACPI ID equal to 255 (and to also not create Processor objects
> with ACPI ID of 255).

Why? An ACPI or APIC ID of 255 is still fine as long as it only occurs
in x2APIC contexts.

Jan



* Re: [RFC PATCH V3 2/3] Tool/ACPI: DSDT extension to support more vcpus
  2017-09-19 15:02             ` Jan Beulich
@ 2017-09-19 15:35               ` Roger Pau Monné
  0 siblings, 0 replies; 19+ messages in thread
From: Roger Pau Monné @ 2017-09-19 15:35 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Lan Tianyu, kevin.tian, wei.liu2, ian.jackson, xen-devel,
	Julien Grall, chao.gao

On Tue, Sep 19, 2017 at 09:02:06AM -0600, Jan Beulich wrote:
> >>> On 19.09.17 at 16:13, <roger.pau@citrix.com> wrote:
> > We need to be careful to not create local APIC entries with either
> > APIC or ACPI ID equal to 255 (and to also not create Processor objects
> > with ACPI ID of 255).
> 
> Why? An ACPI or APIC ID of 255 is still fine as long as it only occurs
> in x2APIC contexts.

That's what I was trying to refer to with "local APIC entries" and
"Processor objects", as opposed to "x2APIC entries" and "Processor
Devices", which are x2APIC contexts. AFAICT we are talking about the
same thing.

Roger.


* Re: [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support
  2017-09-19  3:06     ` Lan Tianyu
@ 2017-09-20 15:13       ` Wei Liu
  2017-09-21  8:50         ` Lan Tianyu
  2017-11-28  8:13         ` Chao Gao
  0 siblings, 2 replies; 19+ messages in thread
From: Wei Liu @ 2017-09-20 15:13 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: tim, kevin.tian, sstabellini, Wei Liu, George.Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, julien.grall, jbeulich,
	roger.pau, chao.gao

On Tue, Sep 19, 2017 at 11:06:26AM +0800, Lan Tianyu wrote:
> Hi Wei:
> 
> On 2017-09-18 21:06, Wei Liu wrote:
> > On Wed, Sep 13, 2017 at 12:52:47AM -0400, Lan Tianyu wrote:
> >> This patch is to increase page pool size when max vcpu number is larger
> >> than 128.
> >>
> >> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> >> ---
> >>  xen/arch/arm/domain.c    |  5 +++++
> >>  xen/arch/x86/domain.c    | 25 +++++++++++++++++++++++++
> >>  xen/common/domctl.c      |  3 +++
> >>  xen/include/xen/domain.h |  2 ++
> >>  4 files changed, 35 insertions(+)
> >>
> >> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> >> index 6512f01..94cf70b 100644
> >> --- a/xen/arch/arm/domain.c
> >> +++ b/xen/arch/arm/domain.c
> >> @@ -824,6 +824,11 @@ int arch_vcpu_reset(struct vcpu *v)
> >>      return 0;
> >>  }
> >>  
> >> +int arch_domain_set_max_vcpus(struct domain *d)
> >> +{
> >> +    return 0;
> >> +}
> >> +
> >>  static int relinquish_memory(struct domain *d, struct page_list_head *list)
> >>  {
> >>      struct page_info *page, *tmp;
> >> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> >> index dbddc53..0e230f9 100644
> >> --- a/xen/arch/x86/domain.c
> >> +++ b/xen/arch/x86/domain.c
> >> @@ -1161,6 +1161,31 @@ int arch_vcpu_reset(struct vcpu *v)
> >>      return 0;
> >>  }
> >>  
> >> +int arch_domain_set_max_vcpus(struct domain *d)
> > 
> > The name doesn't match what the function does.
> > 
> 
> I originally hoped to introduce a hook for each arch when setting max vcpus.
> Each arch function can then do its own customized work, hence the name
> "arch_domain_set_max_vcpus".
> 
> How about "arch_domain_setup_vcpus_resource"?

Before you go away and do a lot of work, please let us think about if
this is the right approach first.

We are close to the freeze; with the amount of patches we receive every
day, an RFC patch like this one is low on my (can't speak for others)
priority list. I am not sure when I will be able to get back to this, but
do ping us if you want to know where things stand.


* Re: [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support
  2017-09-20 15:13       ` Wei Liu
@ 2017-09-21  8:50         ` Lan Tianyu
  2017-09-21 11:27           ` Jan Beulich
  2017-11-28  8:13         ` Chao Gao
  1 sibling, 1 reply; 19+ messages in thread
From: Lan Tianyu @ 2017-09-21  8:50 UTC (permalink / raw)
  To: Wei Liu, jbeulich
  Cc: tim, kevin.tian, sstabellini, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, roger.pau, chao.gao

On 2017-09-20 23:13, Wei Liu wrote:
> On Tue, Sep 19, 2017 at 11:06:26AM +0800, Lan Tianyu wrote:
>> Hi Wei:
>>
>> On 2017-09-18 21:06, Wei Liu wrote:
>>> On Wed, Sep 13, 2017 at 12:52:47AM -0400, Lan Tianyu wrote:
>>>> This patch is to increase page pool size when max vcpu number is larger
>>>> than 128.
>>>>
>>>> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
>>>> ---
>>>>  xen/arch/arm/domain.c    |  5 +++++
>>>>  xen/arch/x86/domain.c    | 25 +++++++++++++++++++++++++
>>>>  xen/common/domctl.c      |  3 +++
>>>>  xen/include/xen/domain.h |  2 ++
>>>>  4 files changed, 35 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>>> index 6512f01..94cf70b 100644
>>>> --- a/xen/arch/arm/domain.c
>>>> +++ b/xen/arch/arm/domain.c
>>>> @@ -824,6 +824,11 @@ int arch_vcpu_reset(struct vcpu *v)
>>>>      return 0;
>>>>  }
>>>>  
>>>> +int arch_domain_set_max_vcpus(struct domain *d)
>>>> +{
>>>> +    return 0;
>>>> +}
>>>> +
>>>>  static int relinquish_memory(struct domain *d, struct page_list_head *list)
>>>>  {
>>>>      struct page_info *page, *tmp;
>>>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>>>> index dbddc53..0e230f9 100644
>>>> --- a/xen/arch/x86/domain.c
>>>> +++ b/xen/arch/x86/domain.c
>>>> @@ -1161,6 +1161,31 @@ int arch_vcpu_reset(struct vcpu *v)
>>>>      return 0;
>>>>  }
>>>>  
>>>> +int arch_domain_set_max_vcpus(struct domain *d)
>>>
>>> The name doesn't match what the function does.
>>>
>>
>> I originally hoped to introduce a hook for each arch when setting max vcpus.
>> Each arch function can then do its own customized work, hence the name
>> "arch_domain_set_max_vcpus".
>>
>> How about "arch_domain_setup_vcpus_resource"?
> 
> Before you go away and do a lot of work, please let us think about if
> this is the right approach first.

Sure. The idea of increasing the page pool when setting max vcpus came from Jan.
Jan, could you help check whether the current patch is the right approach?
Thanks.

> 
> We are close to the freeze; with the amount of patches we receive every
> day, an RFC patch like this one is low on my (can't speak for others)
> priority list. I am not sure when I will be able to get back to this, but
> do ping us if you want to know where things stand.
> 


-- 
Best regards
Tianyu Lan


* Re: [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support
  2017-09-21  8:50         ` Lan Tianyu
@ 2017-09-21 11:27           ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2017-09-21 11:27 UTC (permalink / raw)
  To: Wei Liu, Lan Tianyu
  Cc: tim, kevin.tian, sstabellini, George.Dunlap, andrew.cooper3,
	ian.jackson, xen-devel, julien.grall, roger.pau, chao.gao

>>> On 21.09.17 at 10:50, <tianyu.lan@intel.com> wrote:
>> On 2017-09-20 23:13, Wei Liu wrote:
>> On Tue, Sep 19, 2017 at 11:06:26AM +0800, Lan Tianyu wrote:
>>> Hi Wei:
>>>
>>> On 2017-09-18 21:06, Wei Liu wrote:
>>>> On Wed, Sep 13, 2017 at 12:52:47AM -0400, Lan Tianyu wrote:
>>>>> This patch is to increase page pool size when max vcpu number is larger
>>>>> than 128.
>>>>>
>>>>> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
>>>>> ---
>>>>>  xen/arch/arm/domain.c    |  5 +++++
>>>>>  xen/arch/x86/domain.c    | 25 +++++++++++++++++++++++++
>>>>>  xen/common/domctl.c      |  3 +++
>>>>>  xen/include/xen/domain.h |  2 ++
>>>>>  4 files changed, 35 insertions(+)
>>>>>
>>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>>>> index 6512f01..94cf70b 100644
>>>>> --- a/xen/arch/arm/domain.c
>>>>> +++ b/xen/arch/arm/domain.c
>>>>> @@ -824,6 +824,11 @@ int arch_vcpu_reset(struct vcpu *v)
>>>>>      return 0;
>>>>>  }
>>>>>  
>>>>> +int arch_domain_set_max_vcpus(struct domain *d)
>>>>> +{
>>>>> +    return 0;
>>>>> +}
>>>>> +
>>>>>  static int relinquish_memory(struct domain *d, struct page_list_head *list)
>>>>>  {
>>>>>      struct page_info *page, *tmp;
>>>>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>>>>> index dbddc53..0e230f9 100644
>>>>> --- a/xen/arch/x86/domain.c
>>>>> +++ b/xen/arch/x86/domain.c
>>>>> @@ -1161,6 +1161,31 @@ int arch_vcpu_reset(struct vcpu *v)
>>>>>      return 0;
>>>>>  }
>>>>>  
>>>>> +int arch_domain_set_max_vcpus(struct domain *d)
>>>>
>>>> The name doesn't match what the function does.
>>>>
>>>
>>> I originally hoped to introduce a hook for each arch when setting max vcpus.
>>> Each arch function can then do its own customized work, hence the name
>>> "arch_domain_set_max_vcpus".
>>>
>>> How about "arch_domain_setup_vcpus_resource"?
>> 
>> Before you go away and do a lot of work, please let us think about if
>> this is the right approach first.
> 
> Sure. The idea of increasing the page pool when setting max vcpus came from Jan.
> Jan, could you help check whether the current patch is the right approach?

Whenever I get to it, sure. What I can say right away is that I
agree with the comment about the name of the function.

Jan


* Re: [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support
  2017-09-20 15:13       ` Wei Liu
  2017-09-21  8:50         ` Lan Tianyu
@ 2017-11-28  8:13         ` Chao Gao
  1 sibling, 0 replies; 19+ messages in thread
From: Chao Gao @ 2017-11-28  8:13 UTC (permalink / raw)
  To: Wei Liu
  Cc: Lan Tianyu, tim, kevin.tian, sstabellini, George.Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, julien.grall, jbeulich,
	roger.pau

On Wed, Sep 20, 2017 at 04:13:43PM +0100, Wei Liu wrote:
>On Tue, Sep 19, 2017 at 11:06:26AM +0800, Lan Tianyu wrote:
>> Hi Wei:
>> 
>> On 2017-09-18 21:06, Wei Liu wrote:
>> > On Wed, Sep 13, 2017 at 12:52:47AM -0400, Lan Tianyu wrote:
>> >> This patch is to increase page pool size when max vcpu number is larger
>> >> than 128.
>> >>
>> >> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
>> >> ---
>> >>  xen/arch/arm/domain.c    |  5 +++++
>> >>  xen/arch/x86/domain.c    | 25 +++++++++++++++++++++++++
>> >>  xen/common/domctl.c      |  3 +++
>> >>  xen/include/xen/domain.h |  2 ++
>> >>  4 files changed, 35 insertions(+)
>> >>
>> >> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> >> index 6512f01..94cf70b 100644
>> >> --- a/xen/arch/arm/domain.c
>> >> +++ b/xen/arch/arm/domain.c
>> >> @@ -824,6 +824,11 @@ int arch_vcpu_reset(struct vcpu *v)
>> >>      return 0;
>> >>  }
>> >>  
>> >> +int arch_domain_set_max_vcpus(struct domain *d)
>> >> +{
>> >> +    return 0;
>> >> +}
>> >> +
>> >>  static int relinquish_memory(struct domain *d, struct page_list_head *list)
>> >>  {
>> >>      struct page_info *page, *tmp;
>> >> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> >> index dbddc53..0e230f9 100644
>> >> --- a/xen/arch/x86/domain.c
>> >> +++ b/xen/arch/x86/domain.c
>> >> @@ -1161,6 +1161,31 @@ int arch_vcpu_reset(struct vcpu *v)
>> >>      return 0;
>> >>  }
>> >>  
>> >> +int arch_domain_set_max_vcpus(struct domain *d)
>> > 
>> > The name doesn't match what the function does.
>> > 
>> 
>> I originally hoped to introduce a hook for each arch when setting max vcpus.
>> Each arch function can then do its own customized work, hence the name
>> "arch_domain_set_max_vcpus".
>> 
>> How about "arch_domain_setup_vcpus_resource"?
>
>Before you go away and do a lot of work, please let us think about if
>this is the right approach first.
>
>We are close to the freeze; with the amount of patches we receive every
>day, an RFC patch like this one is low on my (can't speak for others)
>priority list. I am not sure when I will be able to get back to this, but
>do ping us if you want to know where things stand.

Hi, Wei.

The goal of this patch is to avoid running out of shadow pages. The
number of shadow pages is initialized (to 256 for hap, and 1024 for
shadow) when creating the domain. Then the max vcpus is set. In this
process, for each vcpu, construct_vmcs()->paging_update_paging_modes()
->hap_make_monitor_table() always consumes a shadow page. If there are
too many vcpus (i.e. more than 256), we run out of shadow pages.

To address this, there are three solutions:
1) bump up the number of shadow pages to a proper value when setting
max vcpus, like this patch does. Actually, it can also be done in the
toolstack via XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION.
2) the toolstack (see libxl__arch_domain_create->xc_shadow_control())
enlarges or shrinks the shadow memory to another size (according to
xl.cfg, the size is 1MB per guest vCPU plus 8KB per MB of guest RAM;
see the sketch after this list) after setting max vcpus. If the order
of the two operations were swapped, this issue would also disappear.
3) Considering that the toolstack finally adjusts the shadow memory to
a proper size anyway, enlarging the shadow page count from 256 to 512,
just like the v1 patch
(https://lists.xenproject.org/archives/html/xen-devel/2017-08/msg03048.html)
does, doesn't lead to more memory consumption. Since it introduces a
minimal change, I prefer this one.
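
(Aside, purely to illustrate the sizing rule quoted in option 2: a rough
sketch with a hypothetical helper name; the real computation lives in the
toolstack and may differ in details.)

    /* Hypothetical helper mirroring "1MB per guest vCPU plus 8KB per MB
     * of guest RAM"; returns KiB of shadow memory, assuming 4KiB pages. */
    static unsigned long required_shadow_kb(unsigned int max_vcpus,
                                            unsigned long maxmem_kb)
    {
        /* 256 pages (1MB) per vcpu + 2 pages (8KB) per MB of guest RAM. */
        return 4 * (256 * max_vcpus + 2 * (maxmem_kb / 1024));
    }

Even for a modest number of vcpus this is far larger than the initial
256-page (1MB) hap pool, which is why the ordering in option 2 matters.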

Which one do you think is better?

Thanks
Chao

