* [RFC PATCH 0/5] Extend resources to support more vcpus in single VM
@ 2017-08-25  2:52 Lan Tianyu
  2017-08-25  2:52 ` [RFC PATCH 1/5] xen/hap: Increase hap size for more vcpus support Lan Tianyu
                   ` (5 more replies)
  0 siblings, 6 replies; 31+ messages in thread
From: Lan Tianyu @ 2017-08-25  2:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, george.dunlap, andrew.cooper3,
	ian.jackson, jbeulich, chao.gao

This patchset extends several resources (i.e. event channels,
HAP memory and so on) to support more vcpus in a single VM.

Chao Gao (1):
  xl/libacpi: extend lapic_id() to uint32_t

Lan Tianyu (4):
  xen/hap: Increase hap size for more vcpus support
  XL: Increase event channels to support more vcpus
  Tool/ACPI: DSDT extension to support more vcpus
  hvmload: Add x2apic entry support in the MADT build

 tools/firmware/hvmloader/util.c |  2 +-
 tools/libacpi/acpi2_0.h         | 10 +++++++
 tools/libacpi/build.c           | 61 +++++++++++++++++++++++++++++------------
 tools/libacpi/libacpi.h         |  2 +-
 tools/libacpi/mk_dsdt.c         | 11 ++++----
 tools/libxl/libxl_create.c      |  2 +-
 tools/libxl/libxl_x86_acpi.c    |  2 +-
 xen/arch/x86/mm/hap/hap.c       |  2 +-
 8 files changed, 63 insertions(+), 29 deletions(-)

-- 
1.8.3.1



* [RFC PATCH 1/5] xen/hap: Increase hap size for more vcpus support
  2017-08-25  2:52 [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Lan Tianyu
@ 2017-08-25  2:52 ` Lan Tianyu
  2017-08-25  9:14   ` Wei Liu
  2017-08-25  2:52 ` [RFC PATCH 2/5] XL: Increase event channels to support more vcpus Lan Tianyu
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 31+ messages in thread
From: Lan Tianyu @ 2017-08-25  2:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, george.dunlap, andrew.cooper3,
	ian.jackson, jbeulich, chao.gao

This patch increases the HAP allocation size to support more vcpus in a
single VM.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/mm/hap/hap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index cdc77a9..cb81368 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -473,7 +473,7 @@ int hap_enable(struct domain *d, u32 mode)
     if ( old_pages == 0 )
     {
         paging_lock(d);
-        rv = hap_set_allocation(d, 256, NULL);
+        rv = hap_set_allocation(d, 512, NULL);
         if ( rv != 0 )
         {
             hap_set_allocation(d, 0, NULL);
-- 
1.8.3.1



* [RFC PATCH 2/5] XL: Increase event channels to support more vcpus
  2017-08-25  2:52 [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Lan Tianyu
  2017-08-25  2:52 ` [RFC PATCH 1/5] xen/hap: Increase hap size for more vcpus support Lan Tianyu
@ 2017-08-25  2:52 ` Lan Tianyu
  2017-08-25  9:18   ` Wei Liu
  2017-08-25  2:52 ` [RFC PATCH 3/5] Tool/ACPI: DSDT extension " Lan Tianyu
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 31+ messages in thread
From: Lan Tianyu @ 2017-08-25  2:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, chao.gao

This patch increases the default number of event channels to support more
vcpus in a single VM.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 tools/libxl/libxl_create.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1158303..3937169 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -210,7 +210,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
             b_info->iomem[i].gfn = b_info->iomem[i].start;
 
     if (!b_info->event_channels)
-        b_info->event_channels = 1023;
+        b_info->event_channels = 4095;
 
     libxl__arch_domain_build_info_acpi_setdefault(b_info);
 
-- 
1.8.3.1



* [RFC PATCH 3/5] Tool/ACPI: DSDT extension to support more vcpus
  2017-08-25  2:52 [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Lan Tianyu
  2017-08-25  2:52 ` [RFC PATCH 1/5] xen/hap: Increase hap size for more vcpus support Lan Tianyu
  2017-08-25  2:52 ` [RFC PATCH 2/5] XL: Increase event channels to support more vcpus Lan Tianyu
@ 2017-08-25  2:52 ` Lan Tianyu
  2017-08-25  9:25   ` Wei Liu
  2017-08-25 10:36   ` Roger Pau Monné
  2017-08-25  2:52 ` [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build Lan Tianyu
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 31+ messages in thread
From: Lan Tianyu @ 2017-08-25  2:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, chao.gao

This patch changes the DSDT processor objects to support more than 255 vcpus.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 tools/libacpi/mk_dsdt.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
index 2daf32c..d37aed6 100644
--- a/tools/libacpi/mk_dsdt.c
+++ b/tools/libacpi/mk_dsdt.c
@@ -196,8 +196,7 @@ int main(int argc, char **argv)
     /* Define processor objects and control methods. */
     for ( cpu = 0; cpu < max_cpus; cpu++)
     {
-        push_block("Processor", "PR%02X, %d, 0x0000b010, 0x06", cpu, cpu);
-
+        push_block("Device", "P%03X", cpu);
         stmt("Name", "_HID, \"ACPI0007\"");
 
         stmt("Name", "_UID, %d", cpu);
@@ -268,15 +267,15 @@ int main(int argc, char **argv)
         /* Extract current CPU's status: 0=offline; 1=online. */
         stmt("And", "Local1, 1, Local2");
         /* Check if status is up-to-date in the relevant MADT LAPIC entry... */
-        push_block("If", "LNotEqual(Local2, \\_SB.PR%02X.FLG)", cpu);
+        push_block("If", "LNotEqual(Local2, \\_SB.P%03X.FLG)", cpu);
         /* ...If not, update it and the MADT checksum, and notify OSPM. */
-        stmt("Store", "Local2, \\_SB.PR%02X.FLG", cpu);
+        stmt("Store", "Local2, \\_SB.P%03X.FLG", cpu);
         push_block("If", "LEqual(Local2, 1)");
-        stmt("Notify", "PR%02X, 1", cpu); /* Notify: Device Check */
+        stmt("Notify", "P%03X, 1", cpu); /* Notify: Device Check */
         stmt("Subtract", "\\_SB.MSU, 1, \\_SB.MSU"); /* Adjust MADT csum */
         pop_block();
         push_block("Else", NULL);
-        stmt("Notify", "PR%02X, 3", cpu); /* Notify: Eject Request */
+        stmt("Notify", "P%03X, 3", cpu); /* Notify: Eject Request */
         stmt("Add", "\\_SB.MSU, 1, \\_SB.MSU"); /* Adjust MADT csum */
         pop_block();
         pop_block();
-- 
1.8.3.1



* [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build
  2017-08-25  2:52 [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Lan Tianyu
                   ` (2 preceding siblings ...)
  2017-08-25  2:52 ` [RFC PATCH 3/5] Tool/ACPI: DSDT extension " Lan Tianyu
@ 2017-08-25  2:52 ` Lan Tianyu
  2017-08-25  9:26   ` Wei Liu
  2017-08-25 10:11   ` Roger Pau Monné
  2017-08-25  2:52 ` [RFC PATCH 5/5] xl/libacpi: extend lapic_id() to uint32_t Lan Tianyu
  2017-08-25 14:10 ` [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Meng Xu
  5 siblings, 2 replies; 31+ messages in thread
From: Lan Tianyu @ 2017-08-25  2:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, chao.gao

This patch adds x2APIC entry support to the ACPI MADT table.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 tools/libacpi/acpi2_0.h | 10 ++++++++
 tools/libacpi/build.c   | 61 ++++++++++++++++++++++++++++++++++---------------
 2 files changed, 53 insertions(+), 18 deletions(-)

diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
index 758a823..ff18b3e 100644
--- a/tools/libacpi/acpi2_0.h
+++ b/tools/libacpi/acpi2_0.h
@@ -322,6 +322,7 @@ struct acpi_20_waet {
 #define ACPI_IO_SAPIC                       0x06
 #define ACPI_PROCESSOR_LOCAL_SAPIC          0x07
 #define ACPI_PLATFORM_INTERRUPT_SOURCES     0x08
+#define ACPI_PROCESSOR_LOCAL_X2APIC         0x09
 
 /*
  * APIC Structure Definitions.
@@ -338,6 +339,15 @@ struct acpi_20_madt_lapic {
     uint32_t flags;
 };
 
+struct acpi_20_madt_x2apic {
+    uint8_t  type;
+    uint8_t  length;
+    uint16_t reserved;		    /* reserved - must be zero */
+    uint32_t apic_id;           /* Processor x2APIC ID  */
+    uint32_t flags;
+    uint32_t acpi_processor_id;	/* ACPI processor UID */
+};
+
 /*
  * Local APIC Flags.  All other bits are reserved and must be 0.
  */
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index c7cc784..36e582a 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -82,9 +82,9 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
     struct acpi_20_madt           *madt;
     struct acpi_20_madt_intsrcovr *intsrcovr;
     struct acpi_20_madt_ioapic    *io_apic;
-    struct acpi_20_madt_lapic     *lapic;
     const struct hvm_info_table   *hvminfo = config->hvminfo;
     int i, sz;
+    void *end;
 
     if ( config->lapic_id == NULL )
         return NULL;
@@ -92,7 +92,11 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
     sz  = sizeof(struct acpi_20_madt);
     sz += sizeof(struct acpi_20_madt_intsrcovr) * 16;
     sz += sizeof(struct acpi_20_madt_ioapic);
-    sz += sizeof(struct acpi_20_madt_lapic) * hvminfo->nr_vcpus;
+
+    if (hvminfo->nr_vcpus < 256)
+        sz += sizeof(struct acpi_20_madt_lapic) * hvminfo->nr_vcpus;
+    else
+        sz += sizeof(struct acpi_20_madt_x2apic) * hvminfo->nr_vcpus;
 
     madt = ctxt->mem_ops.alloc(ctxt, sz, 16);
     if (!madt) return NULL;
@@ -146,27 +150,48 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
         io_apic->ioapic_id   = config->ioapic_id;
         io_apic->ioapic_addr = config->ioapic_base_address;
 
-        lapic = (struct acpi_20_madt_lapic *)(io_apic + 1);
+        end = (struct acpi_20_madt_lapic *)(io_apic + 1);
     }
     else
-        lapic = (struct acpi_20_madt_lapic *)(madt + 1);
+        end = (struct acpi_20_madt_lapic *)(madt + 1);
 
-    info->nr_cpus = hvminfo->nr_vcpus;
-    info->madt_lapic0_addr = ctxt->mem_ops.v2p(ctxt, lapic);
-    for ( i = 0; i < hvminfo->nr_vcpus; i++ )
-    {
-        memset(lapic, 0, sizeof(*lapic));
-        lapic->type    = ACPI_PROCESSOR_LOCAL_APIC;
-        lapic->length  = sizeof(*lapic);
-        /* Processor ID must match processor-object IDs in the DSDT. */
-        lapic->acpi_processor_id = i;
-        lapic->apic_id = config->lapic_id(i);
-        lapic->flags = (test_bit(i, hvminfo->vcpu_online)
-                        ? ACPI_LOCAL_APIC_ENABLED : 0);
-        lapic++;
+    if (hvminfo->nr_vcpus < 256) {
+        struct acpi_20_madt_lapic *lapic = (struct acpi_20_madt_lapic *)end;
+        info->madt_lapic0_addr = ctxt->mem_ops.v2p(ctxt, lapic);
+        for ( i = 0; i < hvminfo->nr_vcpus; i++ )
+        {
+            memset(lapic, 0, sizeof(*lapic));
+            lapic->type    = ACPI_PROCESSOR_LOCAL_APIC;
+            lapic->length  = sizeof(*lapic);
+            /* Processor ID must match processor-object IDs in the DSDT. */
+            lapic->acpi_processor_id = i;
+            lapic->apic_id = config->lapic_id(i);
+            lapic->flags = ((i < hvminfo->nr_vcpus) &&
+                            test_bit(i, hvminfo->vcpu_online)
+                            ? ACPI_LOCAL_APIC_ENABLED : 0);
+            lapic++;
+        }
+        end = lapic;
+    } else {
+        struct acpi_20_madt_x2apic *lapic = (struct acpi_20_madt_x2apic *)end;
+        info->madt_lapic0_addr = ctxt->mem_ops.v2p(ctxt, lapic);
+        for ( i = 0; i < hvminfo->nr_vcpus; i++ )
+        {
+            memset(lapic, 0, sizeof(*lapic));
+            lapic->type    = ACPI_PROCESSOR_LOCAL_X2APIC;
+            lapic->length  = sizeof(*lapic);
+            /* Processor ID must match processor-object IDs in the DSDT. */
+            lapic->acpi_processor_id = i;
+            lapic->apic_id = config->lapic_id(i);
+            lapic->flags =  test_bit(i, hvminfo->vcpu_online)
+                            ? ACPI_LOCAL_APIC_ENABLED : 0;
+            lapic++;
+        }
+        end = lapic;
     }
 
-    madt->header.length = (unsigned char *)lapic - (unsigned char *)madt;
+    info->nr_cpus = hvminfo->nr_vcpus;
+    madt->header.length = (unsigned char *)end - (unsigned char *)madt;
     set_checksum(madt, offsetof(struct acpi_header, checksum),
                  madt->header.length);
     info->madt_csum_addr =
-- 
1.8.3.1



* [RFC PATCH 5/5] xl/libacpi: extend lapic_id() to uint32_t
  2017-08-25  2:52 [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Lan Tianyu
                   ` (3 preceding siblings ...)
  2017-08-25  2:52 ` [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build Lan Tianyu
@ 2017-08-25  2:52 ` Lan Tianyu
  2017-08-25  9:22   ` Wei Liu
  2017-08-25 14:10 ` [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Meng Xu
  5 siblings, 1 reply; 31+ messages in thread
From: Lan Tianyu @ 2017-08-25  2:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

This patch extends lapic_id() to return a uint32_t so that it can support
more vcpus.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 tools/firmware/hvmloader/util.c | 2 +-
 tools/libacpi/libacpi.h         | 2 +-
 tools/libxl/libxl_x86_acpi.c    | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index db5f240..814ac2e 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -883,7 +883,7 @@ static void acpi_mem_free(struct acpi_ctxt *ctxt,
     /* ACPI builder currently doesn't free memory so this is just a stub */
 }
 
-static uint8_t acpi_lapic_id(unsigned cpu)
+static uint32_t acpi_lapic_id(unsigned cpu)
 {
     return LAPIC_ID(cpu);
 }
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index 74778a5..0b04cbc 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -93,7 +93,7 @@ struct acpi_config {
     unsigned long rsdp;
 
     /* x86-specific parameters */
-    uint8_t (*lapic_id)(unsigned cpu);
+    uint32_t (*lapic_id)(unsigned cpu);
     uint32_t lapic_base_address;
     uint32_t ioapic_base_address;
     uint16_t pci_isa_irq_mask;
diff --git a/tools/libxl/libxl_x86_acpi.c b/tools/libxl/libxl_x86_acpi.c
index 1fa97ff..8fe084d 100644
--- a/tools/libxl/libxl_x86_acpi.c
+++ b/tools/libxl/libxl_x86_acpi.c
@@ -85,7 +85,7 @@ static void acpi_mem_free(struct acpi_ctxt *ctxt,
 {
 }
 
-static uint8_t acpi_lapic_id(unsigned cpu)
+static uint32_t acpi_lapic_id(unsigned cpu)
 {
     return cpu * 2;
 }
-- 
1.8.3.1



* Re: [RFC PATCH 1/5] xen/hap: Increase hap size for more vcpus support
  2017-08-25  2:52 ` [RFC PATCH 1/5] xen/hap: Increase hap size for more vcpus support Lan Tianyu
@ 2017-08-25  9:14   ` Wei Liu
  2017-08-28  8:53     ` Lan, Tianyu
  0 siblings, 1 reply; 31+ messages in thread
From: Wei Liu @ 2017-08-25  9:14 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, george.dunlap, andrew.cooper3, ian.jackson,
	xen-devel, jbeulich, chao.gao

On Thu, Aug 24, 2017 at 10:52:16PM -0400, Lan Tianyu wrote:
> This patch is to increase hap size to support more vcpus in single VM.
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>

Can we maybe derive the number of pages needed from the number of vcpus?

Bumping this value unconditionally is going to increase memory
consumption.
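
A minimal sketch of what deriving the allocation from the vcpu count could
look like in hap_enable(); the 256-pages-per-128-vcpus ratio and the
hap_initial_pages() helper are illustrative assumptions, not something this
series defines:

/* Illustrative sketch: scale the initial HAP pool with the vcpu count,
 * keeping the current 256 pages as the floor for small guests. */
static unsigned int hap_initial_pages(const struct domain *d)
{
    unsigned int pages = 256;

    /* Assumed ratio: one extra 256-page chunk per further 128 vcpus. */
    if ( d->max_vcpus > 128 )
        pages += 256 * DIV_ROUND_UP(d->max_vcpus - 128, 128);

    return pages;
}

        /* ... then, in hap_enable(): */
        paging_lock(d);
        rv = hap_set_allocation(d, hap_initial_pages(d), NULL);

That would keep today's footprint for small guests while only growing the
pool for the large-vcpu configurations this series targets.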


* Re: [RFC PATCH 2/5] XL: Increase event channels to support more vcpus
  2017-08-25  2:52 ` [RFC PATCH 2/5] XL: Increase event channels to support more vcpus Lan Tianyu
@ 2017-08-25  9:18   ` Wei Liu
  2017-08-25  9:57     ` Roger Pau Monné
  0 siblings, 1 reply; 31+ messages in thread
From: Wei Liu @ 2017-08-25  9:18 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Thu, Aug 24, 2017 at 10:52:17PM -0400, Lan Tianyu wrote:
> This patch is to increase event channels to support more vcpus in single VM.
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>

There is no need to bump the default. There is already a configuration
option called max_event_channel.


* Re: [RFC PATCH 5/5] xl/libacpi: extend lapic_id() to uint32_t
  2017-08-25  2:52 ` [RFC PATCH 5/5] xl/libacpi: extend lapic_id() to uint32_t Lan Tianyu
@ 2017-08-25  9:22   ` Wei Liu
  0 siblings, 0 replies; 31+ messages in thread
From: Wei Liu @ 2017-08-25  9:22 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, Chao Gao

On Thu, Aug 24, 2017 at 10:52:20PM -0400, Lan Tianyu wrote:
> From: Chao Gao <chao.gao@intel.com>
> 
> This patch is to extend lapic_id() to support more vcpus.
> 
> Signed-off-by: Chao Gao <chao.gao@intel.com>
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>

This looks reasonable.


* Re: [RFC PATCH 3/5] Tool/ACPI: DSDT extension to support more vcpus
  2017-08-25  2:52 ` [RFC PATCH 3/5] Tool/ACPI: DSDT extension " Lan Tianyu
@ 2017-08-25  9:25   ` Wei Liu
  2017-08-28  9:12     ` Lan, Tianyu
  2017-08-25 10:36   ` Roger Pau Monné
  1 sibling, 1 reply; 31+ messages in thread
From: Wei Liu @ 2017-08-25  9:25 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Thu, Aug 24, 2017 at 10:52:18PM -0400, Lan Tianyu wrote:
> This patch is to change DSDT table for processor object to support >255 vcpus.
> 

Can you provide a link to the spec so people can check if your
modification is correct?


* Re: [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build
  2017-08-25  2:52 ` [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build Lan Tianyu
@ 2017-08-25  9:26   ` Wei Liu
  2017-08-25  9:43     ` Jan Beulich
  2017-08-25 10:11   ` Roger Pau Monné
  1 sibling, 1 reply; 31+ messages in thread
From: Wei Liu @ 2017-08-25  9:26 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Thu, Aug 24, 2017 at 10:52:19PM -0400, Lan Tianyu wrote:
> This patch is to add x2apic entry support for ACPI MADT table.
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> Signed-off-by: Chao Gao <chao.gao@intel.com>

Again, please provide spec.

There are a few coding style issues in code btw.


* Re: [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build
  2017-08-25  9:26   ` Wei Liu
@ 2017-08-25  9:43     ` Jan Beulich
  0 siblings, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2017-08-25  9:43 UTC (permalink / raw)
  To: wei.liu2
  Cc: Lan Tianyu, kevin.tian, andrew.cooper3, ian.jackson, xen-devel, chao.gao

>>> On 25.08.17 at 11:26, <wei.liu2@citrix.com> wrote:
> On Thu, Aug 24, 2017 at 10:52:19PM -0400, Lan Tianyu wrote:
>> This patch is to add x2apic entry support for ACPI MADT table.
>> 
>> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
>> Signed-off-by: Chao Gao <chao.gao@intel.com>
> 
> Again, please provide spec.

I'd expect this to be the ACPI spec; I don't think links should be
needed for such fundamental documentation.

Jan



* Re: [RFC PATCH 2/5] XL: Increase event channels to support more vcpus
  2017-08-25  9:18   ` Wei Liu
@ 2017-08-25  9:57     ` Roger Pau Monné
  2017-08-25 10:04       ` Wei Liu
  0 siblings, 1 reply; 31+ messages in thread
From: Roger Pau Monné @ 2017-08-25  9:57 UTC (permalink / raw)
  To: Wei Liu
  Cc: Lan Tianyu, kevin.tian, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Fri, Aug 25, 2017 at 10:18:24AM +0100, Wei Liu wrote:
> On Thu, Aug 24, 2017 at 10:52:17PM -0400, Lan Tianyu wrote:
> > This patch is to increase event channels to support more vcpus in single VM.
> > 
> > Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> 
> There is no need to bump the default. There is already a configuration
> option called max_event_channel.

Maybe make this somehow based on the number of vCPUs assigned to the
domain?

It's not very user-friendly to allow the creation of a domain with 256
vCPUs, for example, that would then get stuck during boot.

Or at least check max_event_channel and the number of vCPUs and print
a warning message to alert the user that things might go wrong with
this configuration.

Roger.


* Re: [RFC PATCH 2/5] XL: Increase event channels to support more vcpus
  2017-08-25  9:57     ` Roger Pau Monné
@ 2017-08-25 10:04       ` Wei Liu
  2017-08-28  9:11         ` Lan, Tianyu
  0 siblings, 1 reply; 31+ messages in thread
From: Wei Liu @ 2017-08-25 10:04 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Lan Tianyu, kevin.tian, Wei Liu, andrew.cooper3, ian.jackson,
	xen-devel, jbeulich, chao.gao

On Fri, Aug 25, 2017 at 10:57:26AM +0100, Roger Pau Monné wrote:
> On Fri, Aug 25, 2017 at 10:18:24AM +0100, Wei Liu wrote:
> > On Thu, Aug 24, 2017 at 10:52:17PM -0400, Lan Tianyu wrote:
> > > This patch is to increase event channels to support more vcpus in single VM.
> > > 
> > > Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> > 
> > There is no need to bump the default. There is already a configuration
> > option called max_event_channel.
> 
> Maybe make this somehow based on the number of vCPUs assigned to the
> domain?
> 
> It's not very used-friendly to allow the creation of a domain with 256
> vCPUs for example that would then get stuck during boot.
> 
> Or at least check max_event_channel and the number of vCPUs and print
> a warning message to alert the user that things might go wrong with
> this configuration.
> 

The problem is that the number of vcpus is only one factor that affects
the number of event channels needed.


* Re: [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build
  2017-08-25  2:52 ` [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build Lan Tianyu
  2017-08-25  9:26   ` Wei Liu
@ 2017-08-25 10:11   ` Roger Pau Monné
  2017-08-29  3:14     ` Lan Tianyu
  1 sibling, 1 reply; 31+ messages in thread
From: Roger Pau Monné @ 2017-08-25 10:11 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Thu, Aug 24, 2017 at 10:52:19PM -0400, Lan Tianyu wrote:
> This patch is to add x2apic entry support for ACPI MADT table.
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> Signed-off-by: Chao Gao <chao.gao@intel.com>
> ---
>  tools/libacpi/acpi2_0.h | 10 ++++++++
>  tools/libacpi/build.c   | 61 ++++++++++++++++++++++++++++++++++---------------
>  2 files changed, 53 insertions(+), 18 deletions(-)
> 
> diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
> index 758a823..ff18b3e 100644
> --- a/tools/libacpi/acpi2_0.h
> +++ b/tools/libacpi/acpi2_0.h
> @@ -322,6 +322,7 @@ struct acpi_20_waet {
>  #define ACPI_IO_SAPIC                       0x06
>  #define ACPI_PROCESSOR_LOCAL_SAPIC          0x07
>  #define ACPI_PLATFORM_INTERRUPT_SOURCES     0x08
> +#define ACPI_PROCESSOR_LOCAL_X2APIC         0x09
>  
>  /*
>   * APIC Structure Definitions.
> @@ -338,6 +339,15 @@ struct acpi_20_madt_lapic {
>      uint32_t flags;
>  };
>  
> +struct acpi_20_madt_x2apic {
> +    uint8_t  type;
> +    uint8_t  length;
> +    uint16_t reserved;		    /* reserved - must be zero */
> +    uint32_t apic_id;           /* Processor x2APIC ID  */
> +    uint32_t flags;
> +    uint32_t acpi_processor_id;	/* ACPI processor UID */

There's a mix of tabs and spaces above.

> +};
> +
>  /*
>   * Local APIC Flags.  All other bits are reserved and must be 0.
>   */
> diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
> index c7cc784..36e582a 100644
> --- a/tools/libacpi/build.c
> +++ b/tools/libacpi/build.c
> @@ -82,9 +82,9 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
>      struct acpi_20_madt           *madt;
>      struct acpi_20_madt_intsrcovr *intsrcovr;
>      struct acpi_20_madt_ioapic    *io_apic;
> -    struct acpi_20_madt_lapic     *lapic;
>      const struct hvm_info_table   *hvminfo = config->hvminfo;
>      int i, sz;
> +    void *end;
>  
>      if ( config->lapic_id == NULL )
>          return NULL;
> @@ -92,7 +92,11 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
>      sz  = sizeof(struct acpi_20_madt);
>      sz += sizeof(struct acpi_20_madt_intsrcovr) * 16;
>      sz += sizeof(struct acpi_20_madt_ioapic);
> -    sz += sizeof(struct acpi_20_madt_lapic) * hvminfo->nr_vcpus;
> +
> +    if (hvminfo->nr_vcpus < 256)
> +        sz += sizeof(struct acpi_20_madt_lapic) * hvminfo->nr_vcpus;
> +    else
> +        sz += sizeof(struct acpi_20_madt_x2apic) * hvminfo->nr_vcpus;

This is wrong, APIC ID is cpu id * 2, so the limit here needs to be
128, not 256. Also this should be set as a constant somewhere.

Apart from that, although this is technically correct, I would rather
prefer the first 128 vCPUs to have xAPIC entries, and APIC IDs > 254
to use x2APIC entries. This will allow a guest without x2APIC support
to still boot on VMs > 128 vCPUs, although they won't be able to use
the extra CPUs. IIRC this is in line with what bare metal does.
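
For illustration, a rough sketch of that mixed layout in construct_madt(),
keyed on the LAPIC ID returned by config->lapic_id() rather than on a
vCPU-count constant (the structure and field names follow the RFC patch;
the rest is an assumption about how a later version might look):

    /* Sketch: emit a Local APIC entry while the LAPIC ID fits below 255,
     * and an x2APIC entry for the remaining vcpus.
     * (info->madt_lapic0_addr handling omitted from this sketch.) */
    for ( i = 0; i < hvminfo->nr_vcpus; i++ )
    {
        uint32_t apic_id = config->lapic_id(i);
        uint32_t flags = test_bit(i, hvminfo->vcpu_online)
                         ? ACPI_LOCAL_APIC_ENABLED : 0;

        if ( apic_id < 255 )
        {
            struct acpi_20_madt_lapic *lapic = end;

            memset(lapic, 0, sizeof(*lapic));
            lapic->type    = ACPI_PROCESSOR_LOCAL_APIC;
            lapic->length  = sizeof(*lapic);
            lapic->acpi_processor_id = i;
            lapic->apic_id = apic_id;
            lapic->flags   = flags;
            end = lapic + 1;
        }
        else
        {
            struct acpi_20_madt_x2apic *x2apic = end;

            memset(x2apic, 0, sizeof(*x2apic));
            x2apic->type    = ACPI_PROCESSOR_LOCAL_X2APIC;
            x2apic->length  = sizeof(*x2apic);
            x2apic->acpi_processor_id = i;
            x2apic->apic_id = apic_id;
            x2apic->flags   = flags;
            end = x2apic + 1;
        }
    }

The sz computation earlier in the function would then reserve space per
vcpu based on the same LAPIC ID test instead of a hard-coded vcpu count.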

Roger.


* Re: [RFC PATCH 3/5] Tool/ACPI: DSDT extension to support more vcpus
  2017-08-25  2:52 ` [RFC PATCH 3/5] Tool/ACPI: DSDT extension " Lan Tianyu
  2017-08-25  9:25   ` Wei Liu
@ 2017-08-25 10:36   ` Roger Pau Monné
  2017-08-25 12:01     ` Jan Beulich
  2017-08-29  5:01     ` Lan Tianyu
  1 sibling, 2 replies; 31+ messages in thread
From: Roger Pau Monné @ 2017-08-25 10:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Thu, Aug 24, 2017 at 10:52:18PM -0400, Lan Tianyu wrote:
> This patch is to change DSDT table for processor object to support >255 vcpus.

The note in ACPI 6.1A spec section 5.2.12.12 contains the following:

[Compatibility note] On some legacy OSes, Logical processors with APIC
ID values less than 255 (whether in XAPIC or X2APIC mode) must use the
Processor Local APIC structure to convey their APIC information to
OSPM, and those processors must be declared in the DSDT using the
Processor() keyword. Logical processors with APIC ID values 255 and
greater must use the Processor Local x2APIC structure and be declared
using the Device() keyword. See Section 19.6.102 "Processor (Declare
Processor)" for more information.

So you cannot unconditionally switch to using the Device for all
processors.

vCPUs <= 128 need to use the Processor keyword, while vCPUs > 128 need
to use the Device keyword.

FWIW the code below to create the Devices looks fine to me.

Roger.


* Re: [RFC PATCH 3/5] Tool/ACPI: DSDT extension to support more vcpus
  2017-08-25 10:36   ` Roger Pau Monné
@ 2017-08-25 12:01     ` Jan Beulich
  2017-08-29  4:58       ` Lan Tianyu
  2017-08-29  5:01     ` Lan Tianyu
  1 sibling, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2017-08-25 12:01 UTC (permalink / raw)
  To: Roger Pau Monné, Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel, chao.gao

>>> On 25.08.17 at 12:36, <roger.pau@citrix.com> wrote:
> On Thu, Aug 24, 2017 at 10:52:18PM -0400, Lan Tianyu wrote:
>> This patch is to change DSDT table for processor object to support >255 
> vcpus.
> 
> The note in ACPI 6.1A spec section 5.2.12.12 contains the following:
> 
> [Compatibility note] On some legacy OSes, Logical processors with APIC
> ID values less than 255 (whether in XAPIC or X2APIC mode) must use the
> Processor Local APIC structure to convey their APIC information to
> OSPM, and those processors must be declared in the DSDT using the
> Processor() keyword. Logical processors with APIC ID values 255 and
> greater must use the Processor Local x2APIC structure and be declared
> using the Device() keyword. See Section 19.6.102 "Processor (Declare
> Processor)" for more information.
> 
> So you cannot unconditionally switch to using the Device for all
> processors.
> 
> vCPUs <= 128 need to use the Processor keyword, while vCPUs > 128 need
> to use the Device keyword.

While changing this code, may I suggest to stop referring to the
128 vCPU boundary? The decision should be solely based on
LAPIC ID, such that the only place to change later on will end up
being the one where it gets set to double the vCPU number.
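
A sketch of how the mk_dsdt.c loop could look with that applied, choosing
the keyword purely from the guest's LAPIC ID (assumed here, as elsewhere in
the series, to be twice the vCPU number; the LAPIC_ID() macro below is an
assumption for illustration):

/* Assumption carried over from the series: LAPIC ID = 2 * vcpu id. */
#define LAPIC_ID(cpu) ((cpu) * 2)

    for ( cpu = 0; cpu < max_cpus; cpu++ )
    {
        /*
         * Legacy OSes need Processor() objects for APIC IDs below 255;
         * IDs of 255 and above must be declared with Device()
         * (ACPI 6.1A, section 5.2.12.12).
         */
        if ( LAPIC_ID(cpu) < 255 )
            push_block("Processor", "PR%02X, %d, 0x0000b010, 0x06", cpu, cpu);
        else
            push_block("Device", "P%03X", cpu);

        stmt("Name", "_HID, \"ACPI0007\"");
        stmt("Name", "_UID, %d", cpu);
        /* ... rest of the per-CPU body as today ... */
        pop_block();
    }

The PR%02X / P%03X naming split would of course have to be mirrored in the
Notify and FLG references generated further down in the same file.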

Jan



* Re: [RFC PATCH 0/5] Extend resources to support more vcpus in single VM
  2017-08-25  2:52 [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Lan Tianyu
                   ` (4 preceding siblings ...)
  2017-08-25  2:52 ` [RFC PATCH 5/5] xl/libacpi: extend lapic_id() to uint32_t Lan Tianyu
@ 2017-08-25 14:10 ` Meng Xu
  2017-08-29  4:38   ` Lan Tianyu
  5 siblings, 1 reply; 31+ messages in thread
From: Meng Xu @ 2017-08-25 14:10 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
	xen-devel, Jan Beulich, chao.gao

Hi Tianyu,

On Thu, Aug 24, 2017 at 10:52 PM, Lan Tianyu <tianyu.lan@intel.com> wrote:
>
> This patchset is to extend some resources(i.e, event channel,
> hap and so) to support more vcpus for single VM.
>
>
> Chao Gao (1):
>   xl/libacpi: extend lapic_id() to uint32_t
>
> Lan Tianyu (4):
>   xen/hap: Increase hap size for more vcpus support
>   XL: Increase event channels to support more vcpus
>   Tool/ACPI: DSDT extension to support more vcpus
>   hvmload: Add x2apic entry support in the MADT build
>
>  tools/firmware/hvmloader/util.c |  2 +-
>  tools/libacpi/acpi2_0.h         | 10 +++++++
>  tools/libacpi/build.c           | 61 +++++++++++++++++++++++++++++------------
>  tools/libacpi/libacpi.h         |  2 +-
>  tools/libacpi/mk_dsdt.c         | 11 ++++----
>  tools/libxl/libxl_create.c      |  2 +-
>  tools/libxl/libxl_x86_acpi.c    |  2 +-
>  xen/arch/x86/mm/hap/hap.c       |  2 +-
>  8 files changed, 63 insertions(+), 29 deletions(-)


How many VCPUs for a single VM do you want to support with this patch set?

Thanks,

Meng
-- 
-----------
Meng Xu
PhD Candidate in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


* Re: [RFC PATCH 1/5] xen/hap: Increase hap size for more vcpus support
  2017-08-25  9:14   ` Wei Liu
@ 2017-08-28  8:53     ` Lan, Tianyu
  0 siblings, 0 replies; 31+ messages in thread
From: Lan, Tianyu @ 2017-08-28  8:53 UTC (permalink / raw)
  To: Wei Liu
  Cc: kevin.tian, george.dunlap, andrew.cooper3, ian.jackson,
	xen-devel, jbeulich, chao.gao

On 8/25/2017 5:14 PM, Wei Liu wrote:
> On Thu, Aug 24, 2017 at 10:52:16PM -0400, Lan Tianyu wrote:
>> This patch is to increase hap size to support more vcpus in single VM.
>>
>> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> 
> Can we maybe derive the number of pages needed from the number of vcpus?
>

Yes, we can add a check of the vcpu number here.

> Bumping this value unconditionally is going to increase memory
> consumption.
> 


* Re: [RFC PATCH 2/5] XL: Increase event channels to support more vcpus
  2017-08-25 10:04       ` Wei Liu
@ 2017-08-28  9:11         ` Lan, Tianyu
  2017-08-28  9:21           ` Wei Liu
  2017-08-28  9:22           ` Jan Beulich
  0 siblings, 2 replies; 31+ messages in thread
From: Lan, Tianyu @ 2017-08-28  9:11 UTC (permalink / raw)
  To: Wei Liu, Roger Pau Monné
  Cc: kevin.tian, andrew.cooper3, ian.jackson, xen-devel, jbeulich, chao.gao

On 8/25/2017 6:04 PM, Wei Liu wrote:
> On Fri, Aug 25, 2017 at 10:57:26AM +0100, Roger Pau Monné wrote:
>> On Fri, Aug 25, 2017 at 10:18:24AM +0100, Wei Liu wrote:
>>> On Thu, Aug 24, 2017 at 10:52:17PM -0400, Lan Tianyu wrote:
>>>> This patch is to increase event channels to support more vcpus in single VM.
>>>>
>>>> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
>>>
>>> There is no need to bump the default. There is already a configuration
>>> option called max_event_channel.
>>
>> Maybe make this somehow based on the number of vCPUs assigned to the
>> domain?
>>
>> It's not very used-friendly to allow the creation of a domain with 256
>> vCPUs for example that would then get stuck during boot.
>>
>> Or at least check max_event_channel and the number of vCPUs and print
>> a warning message to alert the user that things might go wrong with
>> this configuration.
>>
> 
> The problem is number of vcpu is only one factor that would affect the
> number of event channels needed.

How about producing a warning that the event channels may not be enough
when the vcpu number is >128 and the default max event channel number is
still used?



* Re: [RFC PATCH 3/5] Tool/ACPI: DSDT extension to support more vcpus
  2017-08-25  9:25   ` Wei Liu
@ 2017-08-28  9:12     ` Lan, Tianyu
  0 siblings, 0 replies; 31+ messages in thread
From: Lan, Tianyu @ 2017-08-28  9:12 UTC (permalink / raw)
  To: Wei Liu
  Cc: kevin.tian, andrew.cooper3, ian.jackson, xen-devel, jbeulich, chao.gao

On 8/25/2017 5:25 PM, Wei Liu wrote:
> On Thu, Aug 24, 2017 at 10:52:18PM -0400, Lan Tianyu wrote:
>> This patch is to change DSDT table for processor object to support >255 vcpus.
>>
> 
> Can you provide a link to the spec so people can check if you
> modification is correct?
>

OK. Will add in the next version.



* Re: [RFC PATCH 2/5] XL: Increase event channels to support more vcpus
  2017-08-28  9:11         ` Lan, Tianyu
@ 2017-08-28  9:21           ` Wei Liu
  2017-08-28  9:22           ` Jan Beulich
  1 sibling, 0 replies; 31+ messages in thread
From: Wei Liu @ 2017-08-28  9:21 UTC (permalink / raw)
  To: Lan, Tianyu
  Cc: kevin.tian, Wei Liu, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao, Roger Pau Monné

On Mon, Aug 28, 2017 at 05:11:27PM +0800, Lan, Tianyu wrote:
> On 8/25/2017 6:04 PM, Wei Liu wrote:
> > On Fri, Aug 25, 2017 at 10:57:26AM +0100, Roger Pau Monné wrote:
> > > On Fri, Aug 25, 2017 at 10:18:24AM +0100, Wei Liu wrote:
> > > > On Thu, Aug 24, 2017 at 10:52:17PM -0400, Lan Tianyu wrote:
> > > > > This patch is to increase event channels to support more vcpus in single VM.
> > > > > 
> > > > > Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> > > > 
> > > > There is no need to bump the default. There is already a configuration
> > > > option called max_event_channel.
> > > 
> > > Maybe make this somehow based on the number of vCPUs assigned to the
> > > domain?
> > > 
> > > It's not very used-friendly to allow the creation of a domain with 256
> > > vCPUs for example that would then get stuck during boot.
> > > 
> > > Or at least check max_event_channel and the number of vCPUs and print
> > > a warning message to alert the user that things might go wrong with
> > > this configuration.
> > > 
> > 
> > The problem is number of vcpu is only one factor that would affect the
> > number of event channels needed.
> 
> How about producing a warning about event channel maybe not enough when vcpu
> number is >128 and still uses default max event channel number?
> 

Maybe. If you're going to do that, please:

1. provide the heuristic in a function so that it can be expanded later.
2. make the message system-administrator friendly and point them to the
   max_event_channels option.
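
Putting those two points together, a minimal sketch of how this could look
in libxl__domain_build_info_setdefault(); the 4-channels-per-vcpu figure and
the helper name are assumptions for illustration only:

/*
 * Sketch of a heuristic kept in one place so it can be refined later.
 * The per-vcpu figure is an illustrative assumption, not a measurement.
 */
static uint32_t libxl__evtchn_estimate(const libxl_domain_build_info *b_info)
{
    return 4 * b_info->max_vcpus + 256;
}

    /* ... then, where the default is currently applied: */
    if (!b_info->event_channels)
        b_info->event_channels = 1023;

    if (b_info->event_channels < libxl__evtchn_estimate(b_info))
        LOG(WARN,
            "%u event channels may be too few for %d vcpus; consider "
            "raising max_event_channels in the guest configuration",
            b_info->event_channels, b_info->max_vcpus);

Whether to warn at all, or simply derive a larger default from the same
estimate, is a separate question discussed further down the thread.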


* Re: [RFC PATCH 2/5] XL: Increase event channels to support more vcpus
  2017-08-28  9:11         ` Lan, Tianyu
  2017-08-28  9:21           ` Wei Liu
@ 2017-08-28  9:22           ` Jan Beulich
  1 sibling, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2017-08-28  9:22 UTC (permalink / raw)
  To: Tianyu Lan
  Cc: kevin.tian, Wei Liu, andrew.cooper3, ian.jackson, xen-devel,
	Roger Pau Monné,
	chao.gao

>>> On 28.08.17 at 11:11, <tianyu.lan@intel.com> wrote:
> On 8/25/2017 6:04 PM, Wei Liu wrote:
>> On Fri, Aug 25, 2017 at 10:57:26AM +0100, Roger Pau Monné wrote:
>>> On Fri, Aug 25, 2017 at 10:18:24AM +0100, Wei Liu wrote:
>>>> On Thu, Aug 24, 2017 at 10:52:17PM -0400, Lan Tianyu wrote:
>>>>> This patch is to increase event channels to support more vcpus in single VM.
>>>>>
>>>>> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
>>>>
>>>> There is no need to bump the default. There is already a configuration
>>>> option called max_event_channel.
>>>
>>> Maybe make this somehow based on the number of vCPUs assigned to the
>>> domain?
>>>
>>> It's not very used-friendly to allow the creation of a domain with 256
>>> vCPUs for example that would then get stuck during boot.
>>>
>>> Or at least check max_event_channel and the number of vCPUs and print
>>> a warning message to alert the user that things might go wrong with
>>> this configuration.
>>>
>> 
>> The problem is number of vcpu is only one factor that would affect the
>> number of event channels needed.
> 
> How about producing a warning about event channel maybe not enough when 
> vcpu number is >128 and still uses default max event channel number?

There would be nothing wrong with that for a guest not using
PV drivers, or not requiring multiple channels per vCPU. Hence
I'm not convinced a warning is appropriate here.

Jan


* Re: [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build
  2017-08-25 10:11   ` Roger Pau Monné
@ 2017-08-29  3:14     ` Lan Tianyu
  0 siblings, 0 replies; 31+ messages in thread
From: Lan Tianyu @ 2017-08-29  3:14 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On 2017-08-25 18:11, Roger Pau Monné wrote:
> On Thu, Aug 24, 2017 at 10:52:19PM -0400, Lan Tianyu wrote:
>> This patch is to add x2apic entry support for ACPI MADT table.
>>
>> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
>> Signed-off-by: Chao Gao <chao.gao@intel.com>
>> ---
>>  tools/libacpi/acpi2_0.h | 10 ++++++++
>>  tools/libacpi/build.c   | 61 ++++++++++++++++++++++++++++++++++---------------
>>  2 files changed, 53 insertions(+), 18 deletions(-)
>>
>> diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
>> index 758a823..ff18b3e 100644
>> --- a/tools/libacpi/acpi2_0.h
>> +++ b/tools/libacpi/acpi2_0.h
>> @@ -322,6 +322,7 @@ struct acpi_20_waet {
>>  #define ACPI_IO_SAPIC                       0x06
>>  #define ACPI_PROCESSOR_LOCAL_SAPIC          0x07
>>  #define ACPI_PLATFORM_INTERRUPT_SOURCES     0x08
>> +#define ACPI_PROCESSOR_LOCAL_X2APIC         0x09
>>  
>>  /*
>>   * APIC Structure Definitions.
>> @@ -338,6 +339,15 @@ struct acpi_20_madt_lapic {
>>      uint32_t flags;
>>  };
>>  
>> +struct acpi_20_madt_x2apic {
>> +    uint8_t  type;
>> +    uint8_t  length;
>> +    uint16_t reserved;		    /* reserved - must be zero */
>> +    uint32_t apic_id;           /* Processor x2APIC ID  */
>> +    uint32_t flags;
>> +    uint32_t acpi_processor_id;	/* ACPI processor UID */
> 
> There's a mix of tabs and spaces above.
> 
>> +};
>> +
>>  /*
>>   * Local APIC Flags.  All other bits are reserved and must be 0.
>>   */
>> diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
>> index c7cc784..36e582a 100644
>> --- a/tools/libacpi/build.c
>> +++ b/tools/libacpi/build.c
>> @@ -82,9 +82,9 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
>>      struct acpi_20_madt           *madt;
>>      struct acpi_20_madt_intsrcovr *intsrcovr;
>>      struct acpi_20_madt_ioapic    *io_apic;
>> -    struct acpi_20_madt_lapic     *lapic;
>>      const struct hvm_info_table   *hvminfo = config->hvminfo;
>>      int i, sz;
>> +    void *end;
>>  
>>      if ( config->lapic_id == NULL )
>>          return NULL;
>> @@ -92,7 +92,11 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
>>      sz  = sizeof(struct acpi_20_madt);
>>      sz += sizeof(struct acpi_20_madt_intsrcovr) * 16;
>>      sz += sizeof(struct acpi_20_madt_ioapic);
>> -    sz += sizeof(struct acpi_20_madt_lapic) * hvminfo->nr_vcpus;
>> +
>> +    if (hvminfo->nr_vcpus < 256)
>> +        sz += sizeof(struct acpi_20_madt_lapic) * hvminfo->nr_vcpus;
>> +    else
>> +        sz += sizeof(struct acpi_20_madt_x2apic) * hvminfo->nr_vcpus;
> 
> This is wrong, APIC ID is cpu id * 2, so the limit here needs to be
> 128, not 256. Also this should be set as a constant somewhere.

Sorry. We made the APIC ID equal to the vcpu id in our internal repo and
didn't send that out. Will change this in the next version.

> 
> Apart from that, although this is technically correct, I would rather
> prefer the first 128 vCPUs to have xAPIC entries, and APIC IDs > 254
> to use x2APIC entries. This will allow a guest without x2APIC support
> to still boot on VMs > 128 vCPUs, although they won't be able to use
> the extra CPUs. IIRC this is in line with what bare metal does.

OK. Will update.

> 
> Roger.
> 


-- 
Best regards
Tianyu Lan


* Re: [RFC PATCH 0/5] Extend resources to support more vcpus in single VM
  2017-08-25 14:10 ` [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Meng Xu
@ 2017-08-29  4:38   ` Lan Tianyu
  2017-08-29  8:49     ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Lan Tianyu @ 2017-08-29  4:38 UTC (permalink / raw)
  To: Meng Xu
  Cc: kevin.tian, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
	xen-devel, Jan Beulich, chao.gao

On 2017-08-25 22:10, Meng Xu wrote:
> Hi Tianyu,
> 
> On Thu, Aug 24, 2017 at 10:52 PM, Lan Tianyu <tianyu.lan@intel.com> wrote:
>>
>> This patchset is to extend some resources(i.e, event channel,
>> hap and so) to support more vcpus for single VM.
>>
>>
>> Chao Gao (1):
>>   xl/libacpi: extend lapic_id() to uint32_t
>>
>> Lan Tianyu (4):
>>   xen/hap: Increase hap size for more vcpus support
>>   XL: Increase event channels to support more vcpus
>>   Tool/ACPI: DSDT extension to support more vcpus
>>   hvmload: Add x2apic entry support in the MADT build
>>
>>  tools/firmware/hvmloader/util.c |  2 +-
>>  tools/libacpi/acpi2_0.h         | 10 +++++++
>>  tools/libacpi/build.c           | 61 +++++++++++++++++++++++++++++------------
>>  tools/libacpi/libacpi.h         |  2 +-
>>  tools/libacpi/mk_dsdt.c         | 11 ++++----
>>  tools/libxl/libxl_create.c      |  2 +-
>>  tools/libxl/libxl_x86_acpi.c    |  2 +-
>>  xen/arch/x86/mm/hap/hap.c       |  2 +-
>>  8 files changed, 63 insertions(+), 29 deletions(-)
> 
> 
> How many VCPUs for a single VM do you want to support with this patch set?

Hi Meng:
Sorry for the late response. We hope to increase the max vcpu number to 512.
This also has dependencies on other work (i.e. cpu topology, multi-page
support for the ioreq server and virtual IOMMU).

-- 
Best regards
Tianyu Lan


* Re: [RFC PATCH 3/5] Tool/ACPI: DSDT extension to support more vcpus
  2017-08-25 12:01     ` Jan Beulich
@ 2017-08-29  4:58       ` Lan Tianyu
  0 siblings, 0 replies; 31+ messages in thread
From: Lan Tianyu @ 2017-08-29  4:58 UTC (permalink / raw)
  To: Jan Beulich, Roger Pau Monné
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel, chao.gao

On 2017-08-25 20:01, Jan Beulich wrote:
>>>> On 25.08.17 at 12:36, <roger.pau@citrix.com> wrote:
>> On Thu, Aug 24, 2017 at 10:52:18PM -0400, Lan Tianyu wrote:
>>> This patch is to change DSDT table for processor object to support >255 
>> vcpus.
>>
>> The note in ACPI 6.1A spec section 5.2.12.12 contains the following:
>>
>> [Compatibility note] On some legacy OSes, Logical processors with APIC
>> ID values less than 255 (whether in XAPIC or X2APIC mode) must use the
>> Processor Local APIC structure to convey their APIC information to
>> OSPM, and those processors must be declared in the DSDT using the
>> Processor() keyword. Logical processors with APIC ID values 255 and
>> greater must use the Processor Local x2APIC structure and be declared
>> using the Device() keyword. See Section 19.6.102 "Processor (Declare
>> Processor)" for more information.
>>
>> So you cannot unconditionally switch to using the Device for all
>> processors.
>>
>> vCPUs <= 128 need to use the Processor keyword, while vCPUs > 128 need
>> to use the Device keyword.
> 
> While changing this code, may I suggest to stop referring to the
> 128 vCPU boundary? The decision should be solely based on
> LAPIC ID, such that the only place to change later on will end up
> being the one where it gets set to double the vCPU number.
> 

OK. Got it.

-- 
Best regards
Tianyu Lan


* Re: [RFC PATCH 3/5] Tool/ACPI: DSDT extension to support more vcpus
  2017-08-25 10:36   ` Roger Pau Monné
  2017-08-25 12:01     ` Jan Beulich
@ 2017-08-29  5:01     ` Lan Tianyu
  1 sibling, 0 replies; 31+ messages in thread
From: Lan Tianyu @ 2017-08-29  5:01 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On 2017-08-25 18:36, Roger Pau Monné wrote:
> On Thu, Aug 24, 2017 at 10:52:18PM -0400, Lan Tianyu wrote:
>> This patch is to change DSDT table for processor object to support >255 vcpus.
> 
> The note in ACPI 6.1A spec section 5.2.12.12 contains the following:
> 
> [Compatibility note] On some legacy OSes, Logical processors with APIC
> ID values less than 255 (whether in XAPIC or X2APIC mode) must use the
> Processor Local APIC structure to convey their APIC information to
> OSPM, and those processors must be declared in the DSDT using the
> Processor() keyword. Logical processors with APIC ID values 255 and
> greater must use the Processor Local x2APIC structure and be declared
> using the Device() keyword. See Section 19.6.102 "Processor (Declare
> Processor)" for more information.
> 
> So you cannot unconditionally switch to using the Device for all
> processors.
> 
> vCPUs <= 128 need to use the Processor keyword, while vCPUs > 128 need
> to use the Device keyword.

Yes, that's right and will fix.
-- 
Best regards
Tianyu Lan


* Re: [RFC PATCH 0/5] Extend resources to support more vcpus in single VM
  2017-08-29  4:38   ` Lan Tianyu
@ 2017-08-29  8:49     ` Jan Beulich
  2017-08-30  5:33       ` Lan Tianyu
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2017-08-29  8:49 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
	xen-devel, Meng Xu, chao.gao

>>> On 29.08.17 at 06:38, <tianyu.lan@intel.com> wrote:
> On 2017-08-25 22:10, Meng Xu wrote:
>> How many VCPUs for a single VM do you want to support with this patch set?
> 
> Hi Meng:
> 	Sorry for later response. We hope to increase max vcpu number to 512.
> This also have dependency on other jobs(i.e, cpu topology, mult page
> support for ioreq server and virtual IOMMU).

I'm sorry for repeating this, but your first and foremost goal ought
to be to address the known issues with VMs having up to 128
vCPU-s; Andrew has been pointing this out in the past. I see no
point in pushing up the limit if even the current limit doesn't work
reliably in all cases.

Jan


* Re: [RFC PATCH 0/5] Extend resources to support more vcpus in single VM
  2017-08-29  8:49     ` Jan Beulich
@ 2017-08-30  5:33       ` Lan Tianyu
  2017-08-30  7:12         ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Lan Tianyu @ 2017-08-30  5:33 UTC (permalink / raw)
  To: Jan Beulich, Andrew Cooper
  Cc: kevin.tian, Wei Liu, George Dunlap, Ian Jackson, xen-devel,
	Meng Xu, chao.gao

On 2017-08-29 16:49, Jan Beulich wrote:
>>>> On 29.08.17 at 06:38, <tianyu.lan@intel.com> wrote:
>> On 2017-08-25 22:10, Meng Xu wrote:
>>> How many VCPUs for a single VM do you want to support with this patch set?
>>
>> Hi Meng:
>> 	Sorry for later response. We hope to increase max vcpu number to 512.
>> This also have dependency on other jobs(i.e, cpu topology, mult page
>> support for ioreq server and virtual IOMMU).
> 
> I'm sorry for repeating this, but your first and foremost goal ought
> to be to address the known issues with VMs having up to 128
> vCPU-s; Andrew has been pointing this out in the past. I see no
> point in pushing up the limit if even the current limit doesn't work
> reliably in all cases.
> 

Hi Jan & Andrew:
We ran some HPC benchmarks (i.e. HPLinpack, dgemm, sgemm, igemm and so
on) in a huge VM with 128 vcpus (even >255 vcpus with non-upstreamed
patches) and didn't hit any reliability issue. These benchmarks run heavy
workloads in the VM and some of them even last several hours.

-- 
Best regards
Tianyu Lan


* Re: [RFC PATCH 0/5] Extend resources to support more vcpus in single VM
  2017-08-30  5:33       ` Lan Tianyu
@ 2017-08-30  7:12         ` Jan Beulich
  2017-08-30  9:18           ` George Dunlap
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2017-08-30  7:12 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
	xen-devel, Meng Xu, chao.gao

>>> On 30.08.17 at 07:33, <tianyu.lan@intel.com> wrote:
> On 2017-08-29 16:49, Jan Beulich wrote:
>>>>> On 29.08.17 at 06:38, <tianyu.lan@intel.com> wrote:
>>> On 2017-08-25 22:10, Meng Xu wrote:
>>>> How many VCPUs for a single VM do you want to support with this patch set?
>>>
>>> Hi Meng:
>>> 	Sorry for later response. We hope to increase max vcpu number to 512.
>>> This also have dependency on other jobs(i.e, cpu topology, mult page
>>> support for ioreq server and virtual IOMMU).
>> 
>> I'm sorry for repeating this, but your first and foremost goal ought
>> to be to address the known issues with VMs having up to 128
>> vCPU-s; Andrew has been pointing this out in the past. I see no
>> point in pushing up the limit if even the current limit doesn't work
>> reliably in all cases.
>> 
> 
> Hi Jan & Andrew:
> 	We ran some HPC benchmark(i.e, HPlinkpack, dgemm, sgemm, igemm and so
> on) in a huge VM with 128 vcpus(Even >255 vcpus with non-upstreamed
> patches) and didn't meet unreliable issue. These benchmarks run heavy
> workloads in VM and some of them even last several hours.

I guess it heavily depends on what portions of hypervisor code
those benchmarks exercise. Compute-intensive ones (which
seem a likely case for HPC) aren't that interesting. Ones putting
high pressure on e.g. the p2m lock, or ones causing high IPI rates
(inside the guest) are likely to be more problematic.

Jan


* Re: [RFC PATCH 0/5] Extend resources to support more vcpus in single VM
  2017-08-30  7:12         ` Jan Beulich
@ 2017-08-30  9:18           ` George Dunlap
  0 siblings, 0 replies; 31+ messages in thread
From: George Dunlap @ 2017-08-30  9:18 UTC (permalink / raw)
  To: Jan Beulich, Lan Tianyu
  Cc: kevin.tian, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
	xen-devel, Meng Xu, chao.gao

On 08/30/2017 08:12 AM, Jan Beulich wrote:
>>>> On 30.08.17 at 07:33, <tianyu.lan@intel.com> wrote:
>> On 2017-08-29 16:49, Jan Beulich wrote:
>>>>>> On 29.08.17 at 06:38, <tianyu.lan@intel.com> wrote:
>>>> On 2017-08-25 22:10, Meng Xu wrote:
>>>>> How many VCPUs for a single VM do you want to support with this patch set?
>>>>
>>>> Hi Meng:
>>>> 	Sorry for later response. We hope to increase max vcpu number to 512.
>>>> This also have dependency on other jobs(i.e, cpu topology, mult page
>>>> support for ioreq server and virtual IOMMU).
>>>
>>> I'm sorry for repeating this, but your first and foremost goal ought
>>> to be to address the known issues with VMs having up to 128
>>> vCPU-s; Andrew has been pointing this out in the past. I see no
>>> point in pushing up the limit if even the current limit doesn't work
>>> reliably in all cases.
>>>
>>
>> Hi Jan & Andrew:
>> 	We ran some HPC benchmark(i.e, HPlinkpack, dgemm, sgemm, igemm and so
>> on) in a huge VM with 128 vcpus(Even >255 vcpus with non-upstreamed
>> patches) and didn't meet unreliable issue. These benchmarks run heavy
>> workloads in VM and some of them even last several hours.
> 
> I guess it heavily depends on what portions of hypervisor code
> those benchmarks exercise. Compute-intensives ones (which
> seems a likely case for HPC) aren't that interesting. Ones putting
> high pressure on e.g. the p2m lock, or ones causing high IPI rates
> (inside the guest) are likely to be more problematic.

Right -- and so if Andy's assessment is accurate, it would be a security
issue to allow *untrusted* guests to run with such a huge number of
vcpus.  But it seems to me like it would still be useful for people who
run only trusted guests to run with more vcpus, as long as they
understand the potential limitations.

 -George

