* [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3
@ 2015-06-01 12:56 Chen Baozi
  2015-06-01 12:56 ` [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest Chen Baozi
                   ` (11 more replies)
  0 siblings, 12 replies; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

Currently the number of vcpus on arm64 with GICv3 is limited to 8, due
to the fixed size of the redistributor MMIO region. Increasing the size
only raises the limit to 16, because of the AFF0 restriction in GICv3.
To create a guest with up to 128 vCPUs, which is the maximum number that
GIC-500 can support, this patchset uses the AFF1 information to create a
mapping between vCPUID and vMPIDR and deals with the related issues.
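
For illustration, the linear vCPUID <-> virtual affinity mapping used by
this series boils down to the following standalone sketch (mirroring the
helpers added in patch 2; the names here are illustrative, not the exact
Xen code):

    /* AFF0 holds the low 4 bits of the vCPUID, AFF1 the next 8 bits.
     * E.g. vCPU 23 maps to AFF1 = 1, AFF0 = 7, i.e. affinity 0x107. */
    static unsigned int vcpuid_to_vaff(unsigned int vcpuid)
    {
        return (vcpuid & 0x0f) | (((vcpuid >> 4) & 0xff) << 8);
    }

    static unsigned int vaff_to_vcpuid(unsigned int vaff)
    {
        return (vaff & 0x0f) | (((vaff >> 8) & 0xff) << 4);
    }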

These patches are written based upon Julien's "GICv2 on GICv3" series
and the IROUTER emulation cleanup patch.

Changes from V5:
* Rework gicv3_sgir_to_cpumask in #5
* Rework #8 to split arch_domain_create into two parts:
  - arch_domain_preinit to initialise vgic_ops before evtchn_init is
    called
  - the rest of logic remains in arch_domain_create
* Use a field in struct vgic_ops instead of a function pointer for
  max_vcpus.
* Minor changes according to previous reviews.

Changes from V4:
* Split patch 4/8 of V3 into two parts:
  - Use cpumask_t type for vcpu_mask in vgic_to_sgi.
  - Use AFF1 when translating ICC_SGI1R_EL1 to cpumask.
* Use a more efficient algorithm when calculating the cpumask.
* Add a patch to call arch_domain_create before evtchn_init, because
  evtchn_init needs vgic info which is initialised during
  arch_domain_create.
* Get the max vcpu info from vgic_ops.
* Minor changes according to previous reviews.

Changes from V3:
* Drop the wrong patch that turned domain_max_vcpus into a macro.
* Change domain_max_vcpus to return a value according to the version
  of the vGIC in use.

Changes from V2:
* Reorder the patch which increases MAX_VIRT_CPUS to the last to make
  this series bisectable.
* Drop the dynamic re-distributor region allocation patch in tools.
* Use cpumask_t type instead of unsigned long in vgic_to_sgi and do the
  translation from GICD_SGIR to vcpu_mask in both vGICv2 and vGICv3.
* Make domain_max_vcpus an alias of max_vcpus in struct domain

Changes from V1:
* Expand the GICR address space in the guest memory layout to support up
  to 128 redistributors, rather than using dynamic allocation.
* Add support to include AFF1 information in vMPIDR/logical CPUID.

Chen Baozi (10):
  xen/arm: gic-v3: Increase the size of GICR in address space for guest
  xen/arm: Add functions of mapping between vCPUID and virtual affinity
  xen/arm: Use the new functions for vCPUID/vaffinity transformation
  xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi
  xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask
  tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU
  xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity
  xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
  xen/arm: make domain_max_vcpus return value from vgic_ops
  xen/arm64: increase MAX_VIRT_CPUS to 128 on arm64

 tools/libxl/libxl_arm.c           | 14 ++++++-
 xen/arch/arm/domain.c             | 85 ++++++++++++++++++++++++++-------------
 xen/arch/arm/domain_build.c       | 14 +++++--
 xen/arch/arm/vgic-v2.c            | 19 +++++++--
 xen/arch/arm/vgic-v3.c            | 50 ++++++++++++++++++++---
 xen/arch/arm/vgic.c               | 45 +++++++++------------
 xen/arch/arm/vpsci.c              |  5 +--
 xen/arch/x86/domain.c             |  6 +++
 xen/common/domain.c               |  3 ++
 xen/include/asm-arm/config.h      |  4 ++
 xen/include/asm-arm/domain.h      | 42 ++++++++++++++++++-
 xen/include/asm-arm/gic.h         |  1 +
 xen/include/asm-arm/gic_v3_defs.h |  4 ++
 xen/include/asm-arm/vgic.h        |  4 +-
 xen/include/public/arch-arm.h     |  4 +-
 xen/include/xen/domain.h          |  2 +
 16 files changed, 226 insertions(+), 76 deletions(-)

-- 
2.1.4

* [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 15:49   ` Ian Campbell
  2015-06-01 12:56 ` [PATCH V6 02/10] xen/arm: Add functions of mapping between vCPUID and virtual affinity Chen Baozi
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

Currently the guest GICR region only supports up to 8 vCPUs. Increase it
to hold up to 128 vCPUs, which is the maximum number that GIC-500
supports.
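
For reference, the new size follows from the per-redistributor stride
already defined in this header:

    128 vCPUs * GUEST_GICV3_RDIST_STRIDE (0x20000) = 0x01000000 (16MB)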

Signed-off-by: Chen Baozi <baozich@gmail.com>
Reviewed-by: Julien Grall <julien.grall@citrix.com>
---
 xen/include/public/arch-arm.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index c029e0f..ec0c261 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -388,8 +388,8 @@ struct xen_arch_domainconfig {
 #define GUEST_GICV3_RDIST_STRIDE   0x20000ULL
 #define GUEST_GICV3_RDIST_REGIONS  1
 
-#define GUEST_GICV3_GICR0_BASE     0x03020000ULL    /* vCPU0 - vCPU7 */
-#define GUEST_GICV3_GICR0_SIZE     0x00100000ULL
+#define GUEST_GICV3_GICR0_BASE     0x03020000ULL    /* vCPU0 - vCPU127 */
+#define GUEST_GICV3_GICR0_SIZE     0x01000000ULL
 
 /*
  * 16MB == 4096 pages reserved for guest to use as a region to map its
-- 
2.1.4

* [PATCH V6 02/10] xen/arm: Add functions of mapping between vCPUID and virtual affinity
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
  2015-06-01 12:56 ` [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 15:54   ` Ian Campbell
  2015-06-01 12:56 ` [PATCH V6 03/10] xen/arm: Use the new functions for vCPUID/vaffinity transformation Chen Baozi
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

GICv3 restricts the maximum number of CPUs in affinity level 0 (one
cluster) to 16, which means the upper 4 bits of AFF0 are unused. The
current implementation sets AFF0 equal to the vCPUID, which puts all
vCPUs in one cluster and limits their number to 16. To support more
than 16 vCPUs in one guest, we need to make use of AFF1. Considering
the unused upper 4 bits, we introduce a pair of functions mapping
between vCPUID and virtual affinity.
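
A few round-trip examples of the mapping below, for illustration only:

    vcpuid_to_vaffinity(7)   == 0x007  /* AFF1 = 0, AFF0 = 7  */
    vcpuid_to_vaffinity(23)  == 0x107  /* AFF1 = 1, AFF0 = 7  */
    vcpuid_to_vaffinity(127) == 0x70f  /* AFF1 = 7, AFF0 = 15 */
    vaffinity_to_vcpuid(0x107) == 23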

Signed-off-by: Chen Baozi <baozich@gmail.com>
Reviewed-by: Julien Grall <julien.grall@citrix.com>
---
 xen/include/asm-arm/domain.h | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 75b17af..b7b5cd2 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -266,6 +266,47 @@ static inline unsigned int domain_max_vcpus(const struct domain *d)
     return MAX_VIRT_CPUS;
 }
 
+/*
+ * Due to the restriction of GICv3, the number of vCPUs in AFF0 is
+ * limited to 16, thus only the first 4 bits of AFF0 are legal. We will
+ * use the first 2 affinity levels here, expanding the number of vCPU up
+ * to 4096 (16*256), which is more than 128 PEs that GIC-500 supports.
+ *
+ * Since we don't save information of vCPU's topology (affinity) in
+ * vMPIDR at the moment, we map the vcpuid to the vMPIDR linearly.
+ *
+ * XXX: We may have multi-threading or virtual cluster information in
+ * the future.
+ */
+static inline unsigned int vaffinity_to_vcpuid(register_t vaff)
+{
+    unsigned int vcpuid;
+
+    vaff &= MPIDR_HWID_MASK;
+
+    vcpuid = MPIDR_AFFINITY_LEVEL(vaff, 0);
+    vcpuid |= MPIDR_AFFINITY_LEVEL(vaff, 1) << 4;
+
+    return vcpuid;
+}
+
+static inline register_t vcpuid_to_vaffinity(unsigned int vcpuid)
+{
+    register_t vaff;
+
+    /*
+     * Right now only AFF0 and AFF1 are supported in virtual affinity.
+     * Since only the first 4 bits in AFF0 are used in GICv3, the
+     * available bits are 12 (4+8).
+     */
+    BUILD_BUG_ON(!(MAX_VIRT_CPUS < ((1 << 12))));
+
+    vaff = (vcpuid & 0x0f) << MPIDR_LEVEL_SHIFT(0);
+    vaff |= ((vcpuid >> 4) & MPIDR_LEVEL_MASK) << MPIDR_LEVEL_SHIFT(1);
+
+    return vaff;
+}
+
 #endif /* __ASM_DOMAIN_H__ */
 
 /*
-- 
2.1.4

* [PATCH V6 03/10] xen/arm: Use the new functions for vCPUID/vaffinity transformation
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
  2015-06-01 12:56 ` [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest Chen Baozi
  2015-06-01 12:56 ` [PATCH V6 02/10] xen/arm: Add functions of mapping between vCPUID and virtual affinity Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 15:56   ` Ian Campbell
  2015-06-01 12:56 ` [PATCH V6 04/10] xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi Chen Baozi
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

There are 3 places to change:

* Initialise vMPIDR value in vcpu_initialise()
* Find the vCPU from vMPIDR affinity information when accessing GICD
  registers in vGIC
* Find the vCPU from vMPIDR affinity information when booting a vCPU
  via vPSCI
  - Also make the code for PSCI 0.1 use an MPIDR-like value as the cpuid.

Signed-off-by: Chen Baozi <baozich@gmail.com>
Reviewed-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/arm/domain.c  | 6 +-----
 xen/arch/arm/vgic-v3.c | 2 +-
 xen/arch/arm/vpsci.c   | 5 +----
 3 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 2bde26e..0cf147c 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -501,11 +501,7 @@ int vcpu_initialise(struct vcpu *v)
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
 
-    /*
-     * By default exposes an SMP system with AFF0 set to the VCPU ID
-     * TODO: Handle multi-threading processor and cluster
-     */
-    v->arch.vmpidr = MPIDR_SMP | (v->vcpu_id << MPIDR_AFF0_SHIFT);
+    v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
     v->arch.actlr = READ_SYSREG32(ACTLR_EL1);
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 540f85f..ef9a71a 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -61,7 +61,7 @@ static struct vcpu *vgic_v3_irouter_to_vcpu(struct domain *d, uint64_t irouter)
     if ( irouter & GICD_IROUTER_SPI_MODE_ANY )
         return d->vcpu[0];
 
-    vcpu_id = irouter & MPIDR_AFF0_MASK;
+    vcpu_id = vaffinity_to_vcpuid(irouter);
     if ( vcpu_id >= d->max_vcpus )
         return NULL;
 
diff --git a/xen/arch/arm/vpsci.c b/xen/arch/arm/vpsci.c
index 5d899be..aebe1e2 100644
--- a/xen/arch/arm/vpsci.c
+++ b/xen/arch/arm/vpsci.c
@@ -32,10 +32,7 @@ static int do_common_cpu_on(register_t target_cpu, register_t entry_point,
     int is_thumb = entry_point & 1;
     register_t vcpuid;
 
-    if( ver == XEN_PSCI_V_0_2 )
-        vcpuid = (target_cpu & MPIDR_HWID_MASK);
-    else
-        vcpuid = target_cpu;
+    vcpuid = vaffinity_to_vcpuid(target_cpu);
 
     if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
         return PSCI_INVALID_PARAMETERS;
-- 
2.1.4

* [PATCH V6 04/10] xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
                   ` (2 preceding siblings ...)
  2015-06-01 12:56 ` [PATCH V6 03/10] xen/arm: Use the new functions for vCPUID/vaffinity transformation Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 16:05   ` Ian Campbell
  2015-06-01 12:56 ` [PATCH V6 05/10] xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask Chen Baozi
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

Use cpumask_t instead of unsigned long, which can only express at most
64 CPUs. Add {gicv2|gicv3}_sgir_to_cpumask to the corresponding vGICs
to translate GICD_SGIR/ICC_SGI1R_EL1 into the vcpu_mask for vgic_to_sgi.
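
For example (a sketch of the vGICv2 case, using the helper added below),
a GICD_SGIR write with CPUTargetList (bits [23:16]) set to 0x05 now
yields a cpumask with vCPU0 and vCPU2 set, instead of an unsigned long
holding the raw value 0x5:

    cpumask_t mask;

    cpumask_clear(&mask);
    gicv2_sgir_to_cpumask(&mask, 0x05 << GICD_SGI_TARGET_SHIFT);
    /* cpumask_test_cpu(0, &mask) and cpumask_test_cpu(2, &mask) are true */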

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/vgic-v2.c            | 16 +++++++++++++---
 xen/arch/arm/vgic-v3.c            | 18 ++++++++++++++----
 xen/arch/arm/vgic.c               | 31 ++++++++++++++++++++-----------
 xen/include/asm-arm/gic.h         |  1 +
 xen/include/asm-arm/gic_v3_defs.h |  2 ++
 xen/include/asm-arm/vgic.h        |  2 +-
 6 files changed, 51 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 3be1a51..17a3c9f 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -33,6 +33,15 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 
+static inline void gicv2_sgir_to_cpumask(cpumask_t *cpumask,
+                                         const register_t sgir)
+{
+    unsigned long target_list;
+
+    target_list = ((sgir & GICD_SGI_TARGET_MASK) >> GICD_SGI_TARGET_SHIFT);
+    bitmap_copy(cpumask_bits(cpumask), &target_list, GICD_SGI_TARGET_BITS);
+}
+
 static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
@@ -201,16 +210,17 @@ static int vgic_v2_to_sgi(struct vcpu *v, register_t sgir)
     int virq;
     int irqmode;
     enum gic_sgi_mode sgi_mode;
-    unsigned long vcpu_mask = 0;
+    cpumask_t vcpu_mask;
 
+    cpumask_clear(&vcpu_mask);
     irqmode = (sgir & GICD_SGI_TARGET_LIST_MASK) >> GICD_SGI_TARGET_LIST_SHIFT;
     virq = (sgir & GICD_SGI_INTID_MASK);
-    vcpu_mask = (sgir & GICD_SGI_TARGET_MASK) >> GICD_SGI_TARGET_SHIFT;
 
     /* Map GIC sgi value to enum value */
     switch ( irqmode )
     {
     case GICD_SGI_TARGET_LIST_VAL:
+        gicv2_sgir_to_cpumask(&vcpu_mask, sgir);
         sgi_mode = SGI_TARGET_LIST;
         break;
     case GICD_SGI_TARGET_OTHERS_VAL:
@@ -226,7 +236,7 @@ static int vgic_v2_to_sgi(struct vcpu *v, register_t sgir)
         return 0;
     }
 
-    return vgic_to_sgi(v, sgir, sgi_mode, virq, vcpu_mask);
+    return vgic_to_sgi(v, sgir, sgi_mode, virq, &vcpu_mask);
 }
 
 static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index ef9a71a..2bf5294 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -972,22 +972,32 @@ write_ignore:
     return 1;
 }
 
+static inline void gicv3_sgir_to_cpumask(cpumask_t *cpumask,
+                                         const register_t sgir)
+{
+    unsigned long target_list;
+
+    target_list = sgir & ICH_SGI_TARGETLIST_MASK;
+    bitmap_copy(cpumask_bits(cpumask), &target_list, ICH_SGI_TARGET_BITS);
+}
+
 static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
 {
     int virq;
     int irqmode;
     enum gic_sgi_mode sgi_mode;
-    unsigned long vcpu_mask = 0;
+    cpumask_t vcpu_mask;
 
+    cpumask_clear(&vcpu_mask);
     irqmode = (sgir >> ICH_SGI_IRQMODE_SHIFT) & ICH_SGI_IRQMODE_MASK;
     virq = (sgir >> ICH_SGI_IRQ_SHIFT ) & ICH_SGI_IRQ_MASK;
-    /* SGI's are injected at Rdist level 0. ignoring affinity 1, 2, 3 */
-    vcpu_mask = sgir & ICH_SGI_TARGETLIST_MASK;
 
     /* Map GIC sgi value to enum value */
     switch ( irqmode )
     {
     case ICH_SGI_TARGET_LIST:
+        /* SGI's are injected at Rdist level 0. ignoring affinity 1, 2, 3 */
+        gicv3_sgir_to_cpumask(&vcpu_mask, sgir);
         sgi_mode = SGI_TARGET_LIST;
         break;
     case ICH_SGI_TARGET_OTHERS:
@@ -998,7 +1008,7 @@ static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
         return 0;
     }
 
-    return vgic_to_sgi(v, sgir, sgi_mode, virq, vcpu_mask);
+    return vgic_to_sgi(v, sgir, sgi_mode, virq, &vcpu_mask);
 }
 
 static int vgic_v3_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7b387b7..1bd86f8 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -318,15 +318,20 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     }
 }
 
-/* TODO: unsigned long is used to fit vcpu_mask.*/
 int vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode, int virq,
-                unsigned long vcpu_mask)
+                cpumask_t *vcpu_mask)
 {
     struct domain *d = v->domain;
     int vcpuid;
     int i;
 
-    ASSERT(d->max_vcpus < 8*sizeof(vcpu_mask));
+    /*
+     * cpumask_t is based on NR_CPUS and there is no relation between
+     * NR_CPUS and MAX_VIRT_CPUS. Furthermore, NR_CPUS can be configured
+     * at build time by the user. So we add a BUILD_BUG_ON here in order
+     * to avoid an insecure hypervisor.
+     */
+    BUILD_BUG_ON(sizeof(cpumask_t)*8 < MAX_VIRT_CPUS);
 
     ASSERT( virq < 16 );
 
@@ -338,25 +343,27 @@ int vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode, int
     case SGI_TARGET_OTHERS:
         /*
          * We expect vcpu_mask to be 0 for SGI_TARGET_OTHERS and
-         * SGI_TARGET_SELF mode. So Force vcpu_mask to 0
+         * SGI_TARGET_SELF mode. Since the caller of this function has
+         * already cleared the cpumask, we assume it has been set to 0
+         * here.
          */
         perfc_incr(vgic_sgi_others);
-        vcpu_mask = 0;
         for ( i = 0; i < d->max_vcpus; i++ )
         {
             if ( i != current->vcpu_id && d->vcpu[i] != NULL &&
                  is_vcpu_online(d->vcpu[i]) )
-                set_bit(i, &vcpu_mask);
+                cpumask_set_cpu(i, vcpu_mask);
         }
         break;
     case SGI_TARGET_SELF:
         /*
          * We expect vcpu_mask to be 0 for SGI_TARGET_OTHERS and
-         * SGI_TARGET_SELF mode. So Force vcpu_mask to 0
+         * SGI_TARGET_SELF mode. Since the caller of this function has
+         * already cleared the cpumask, we assume it has been set to 0
+         * here.
          */
         perfc_incr(vgic_sgi_self);
-        vcpu_mask = 0;
-        set_bit(current->vcpu_id, &vcpu_mask);
+        cpumask_set_cpu(current->vcpu_id, vcpu_mask);
         break;
     default:
         gprintk(XENLOG_WARNING,
@@ -365,12 +372,14 @@ int vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode, int
         return 0;
     }
 
-    for_each_set_bit( vcpuid, &vcpu_mask, d->max_vcpus )
+    for ( vcpuid = cpumask_first(vcpu_mask);
+          vcpuid < d->max_vcpus;
+          vcpuid = cpumask_next(vcpuid, vcpu_mask))
     {
         if ( d->vcpu[vcpuid] != NULL && !is_vcpu_online(d->vcpu[vcpuid]) )
         {
             gprintk(XENLOG_WARNING, "VGIC: write r=%"PRIregister" \
-                    vcpu_mask=%lx, wrong CPUTargetList\n", sgir, vcpu_mask);
+                    , wrong CPUTargetList\n", sgir);
             continue;
         }
         vgic_vcpu_inject_irq(d->vcpu[vcpuid], virq);
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 463fffb..c6ef4bf 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -64,6 +64,7 @@
 #define GICD_SGI_TARGET_SELF_VAL     (2)
 #define GICD_SGI_TARGET_SHIFT        (16)
 #define GICD_SGI_TARGET_MASK         (0xFFUL<<GICD_SGI_TARGET_SHIFT)
+#define GICD_SGI_TARGET_BITS         (8)
 #define GICD_SGI_GROUP1              (1UL<<15)
 #define GICD_SGI_INTID_MASK          (0xFUL)
 
diff --git a/xen/include/asm-arm/gic_v3_defs.h b/xen/include/asm-arm/gic_v3_defs.h
index 556f114..e106e67 100644
--- a/xen/include/asm-arm/gic_v3_defs.h
+++ b/xen/include/asm-arm/gic_v3_defs.h
@@ -152,6 +152,8 @@
 #define ICH_SGI_IRQ_SHIFT            24
 #define ICH_SGI_IRQ_MASK             0xf
 #define ICH_SGI_TARGETLIST_MASK      0xffff
+#define ICH_SGI_TARGET_BITS          16
+
 #endif /* __ASM_ARM_GIC_V3_DEFS_H__ */
 
 /*
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 6dcdf9f..2f413e1 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -201,7 +201,7 @@ DEFINE_VGIC_OPS(3)
 extern int vcpu_vgic_free(struct vcpu *v);
 extern int vgic_to_sgi(struct vcpu *v, register_t sgir,
                        enum gic_sgi_mode irqmode, int virq,
-                       unsigned long vcpu_mask);
+                       cpumask_t *vcpu_mask);
 extern void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq);
 
 /* Reserve a specific guest vIRQ */
-- 
2.1.4

* [PATCH V6 05/10] xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
                   ` (3 preceding siblings ...)
  2015-06-01 12:56 ` [PATCH V6 04/10] xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 16:09   ` Ian Campbell
  2015-06-01 12:56 ` [PATCH V6 06/10] tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU Chen Baozi
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

To support more than 16 vCPUs, we have to calculate the cpumask using
the AFF1 field value in ICC_SGI1R_EL1.
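
As an illustration of the resulting behaviour (based on the hunk below),
whereas the previous patch ignored the Aff1 field entirely:

    Aff1 = 2, TargetList = 0x0005
    => target_bitmap[2] = 0x0005
    => vCPU32 and vCPU34 (2 * 16 + {0, 2}) are set in vcpu_mask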

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/vgic-v3.c            | 30 ++++++++++++++++++++++++++----
 xen/include/asm-arm/gic_v3_defs.h |  2 ++
 2 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 2bf5294..f2b78a4 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -972,13 +972,28 @@ write_ignore:
     return 1;
 }
 
-static inline void gicv3_sgir_to_cpumask(cpumask_t *cpumask,
+static inline int gicv3_sgir_to_cpumask(cpumask_t *cpumask,
                                          const register_t sgir)
 {
     unsigned long target_list;
+    uint16_t *target_bitmap;
+    unsigned int aff1;
 
     target_list = sgir & ICH_SGI_TARGETLIST_MASK;
-    bitmap_copy(cpumask_bits(cpumask), &target_list, ICH_SGI_TARGET_BITS);
+    /* We assume that only AFF1 is used in ICC_SGI1R_EL1. */
+    aff1 = (sgir >> ICH_SGI_AFFINITY_LEVEL(1)) & ICH_SGI_AFFx_MASK;
+
+    /* There might be up to 4096 vCPUs when all bits in affinity 1
+     * are used, so we have to check whether the index would overflow
+     * the bitmap array of cpumask_t.
+     */
+    if ( ((aff1 + 1) * ICH_SGI_TARGET_BITS) > NR_CPUS )
+        return 1;
+
+    target_bitmap = (uint16_t *)cpumask_bits(cpumask);
+    target_bitmap[aff1] = target_list;
+
+    return 0;
 }
 
 static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
@@ -996,8 +1011,15 @@ static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
     switch ( irqmode )
     {
     case ICH_SGI_TARGET_LIST:
-        /* SGI's are injected at Rdist level 0. ignoring affinity 1, 2, 3 */
-        gicv3_sgir_to_cpumask(&vcpu_mask, sgir);
+        /*
+         * Currently we assume only affinity level 1 is used in SGI
+         * injection, ignoring levels 2 & 3.
+         */
+        if ( gicv3_sgir_to_cpumask(&vcpu_mask, sgir) )
+        {
+            gprintk(XENLOG_WARNING, "Wrong affinity in SGI1R_EL register\n");
+            return 0;
+        }
         sgi_mode = SGI_TARGET_LIST;
         break;
     case ICH_SGI_TARGET_OTHERS:
diff --git a/xen/include/asm-arm/gic_v3_defs.h b/xen/include/asm-arm/gic_v3_defs.h
index e106e67..333aa56 100644
--- a/xen/include/asm-arm/gic_v3_defs.h
+++ b/xen/include/asm-arm/gic_v3_defs.h
@@ -153,6 +153,8 @@
 #define ICH_SGI_IRQ_MASK             0xf
 #define ICH_SGI_TARGETLIST_MASK      0xffff
 #define ICH_SGI_TARGET_BITS          16
+#define ICH_SGI_AFFx_MASK            0xff
+#define ICH_SGI_AFFINITY_LEVEL(x)    (16 * (x))
 
 #endif /* __ASM_ARM_GIC_V3_DEFS_H__ */
 
-- 
2.1.4

* [PATCH V6 06/10] tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
                   ` (4 preceding siblings ...)
  2015-06-01 12:56 ` [PATCH V6 05/10] xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 16:11   ` Ian Campbell
  2015-06-05 16:12   ` Ian Campbell
  2015-06-01 12:56 ` [PATCH V6 07/10] xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity Chen Baozi
                   ` (5 subsequent siblings)
  11 siblings, 2 replies; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

According to the ARM CPUs bindings, the reg field should match the
MPIDR's affinity bits. At the moment we use AFF0 and AFF1 when
constructing the guest's reg value, which is enough for the current
maximum number of vCPUs.
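
A worked example of the new node naming and reg value (illustrative
only):

    i = 18  =>  mpidr_aff = (18 & 0x0f) | (((18 >> 4) & 0xff) << 8) = 0x102
            =>  node "cpu@102" with reg = 0x102 (previously "cpu@18", reg 18)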

Signed-off-by: Chen Baozi <baozich@gmail.com>
Reviewed-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxl/libxl_arm.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index c5088c4..16f4158 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -272,6 +272,7 @@ static int make_cpus_node(libxl__gc *gc, void *fdt, int nr_cpus,
                           const struct arch_info *ainfo)
 {
     int res, i;
+    uint64_t mpidr_aff;
 
     res = fdt_begin_node(fdt, "cpus");
     if (res) return res;
@@ -283,7 +284,16 @@ static int make_cpus_node(libxl__gc *gc, void *fdt, int nr_cpus,
     if (res) return res;
 
     for (i = 0; i < nr_cpus; i++) {
-        const char *name = GCSPRINTF("cpu@%d", i);
+        const char *name;
+
+        /*
+         * According to the ARM CPUs bindings, the reg field should match
+         * the MPIDR's affinity bits. At the moment we use AFF0 and AFF1
+         * when constructing the guest's reg value, which is enough for
+         * the current maximum number of vCPUs.
+         */
+        mpidr_aff = (i & 0x0f) | (((i >> 4) & 0xff) << 8);
+        name = GCSPRINTF("cpu@%lx", mpidr_aff);
 
         res = fdt_begin_node(fdt, name);
         if (res) return res;
@@ -297,7 +307,7 @@ static int make_cpus_node(libxl__gc *gc, void *fdt, int nr_cpus,
         res = fdt_property_string(fdt, "enable-method", "psci");
         if (res) return res;
 
-        res = fdt_property_regs(gc, fdt, 1, 0, 1, (uint64_t)i);
+        res = fdt_property_regs(gc, fdt, 1, 0, 1, mpidr_aff);
         if (res) return res;
 
         res = fdt_end_node(fdt);
-- 
2.1.4

* [PATCH V6 07/10] xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
                   ` (5 preceding siblings ...)
  2015-06-01 12:56 ` [PATCH V6 06/10] tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 16:13   ` Ian Campbell
  2015-06-01 12:56 ` [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init Chen Baozi
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

According to the ARM CPUs bindings, the reg field should match the
MPIDR's affinity bits. At the moment we use AFF0 and AFF1 when
constructing the guest's reg value, which is enough for the current
maximum number of vCPUs.

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/domain_build.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index a156de9..154c9d6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -712,6 +712,7 @@ static int make_cpus_node(const struct domain *d, void *fdt,
     char buf[15];
     u32 clock_frequency;
     bool_t clock_valid;
+    uint64_t mpidr_aff;
 
     DPRINT("Create cpus node\n");
 
@@ -761,9 +762,16 @@ static int make_cpus_node(const struct domain *d, void *fdt,
 
     for ( cpu = 0; cpu < d->max_vcpus; cpu++ )
     {
-        DPRINT("Create cpu@%u node\n", cpu);
+        /*
+         * According to the ARM CPUs bindings, the reg field should match
+         * the MPIDR's affinity bits. At the moment we use AFF0 and AFF1
+         * when constructing the guest's reg value, which is enough for
+         * the current maximum number of vCPUs.
+         */
+        mpidr_aff = vcpuid_to_vaffinity(cpu);
+        DPRINT("Create cpu@%lx (logical CPUID: %d) node\n", mpidr_aff, cpu);
 
-        snprintf(buf, sizeof(buf), "cpu@%u", cpu);
+        snprintf(buf, sizeof(buf), "cpu@%lx", mpidr_aff);
         res = fdt_begin_node(fdt, buf);
         if ( res )
             return res;
@@ -776,7 +784,7 @@ static int make_cpus_node(const struct domain *d, void *fdt,
         if ( res )
             return res;
 
-        res = fdt_property_cell(fdt, "reg", cpu);
+        res = fdt_property_cell(fdt, "reg", mpidr_aff);
         if ( res )
             return res;
 
-- 
2.1.4

* [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
                   ` (6 preceding siblings ...)
  2015-06-01 12:56 ` [PATCH V6 07/10] xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 16:22   ` Ian Campbell
  2015-06-01 12:56 ` [PATCH V6 09/10] xen/arm: make domain_max_vcpus return value from vgic_ops Chen Baozi
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Keir Fraser, Ian Campbell, Andrew Cooper, Tim Deegan,
	Julien Grall, Stefano Stabellini, Jan Beulich, Chen Baozi

From: Chen Baozi <baozich@gmail.com>

evtchn_init calls domain_max_vcpus to allocate poll_mask. On arm/arm64,
that number is determined by the vGIC the guest is going to use, which
in the current implementation is not initialised until arch_domain_create
is called. However, moving arch_domain_create before evtchn_init would
mean allocating memory before checking the XSM policy, which is not
acceptable: if the domain is not allowed to boot by the XSM policy, the
expensive work done in arch_domain_create would be wasted. Thus, we
introduce arch_domain_preinit so that the vgic_ops initialisation is
done earlier.
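
The resulting call order in domain_create() is roughly the following
(a sketch based on the hunks below):

    arch_domain_preinit()  /* selects vgic_ops, domain_max_vcpus() now valid */
    xsm_domain_create()    /* policy check still happens before big allocations */
    evtchn_init()          /* may call domain_max_vcpus() safely */
    arch_domain_create()   /* the expensive part, otherwise unchanged */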

Signed-off-by: Chen Baozi <baozich@gmail.com>
Cc: Julien Grall <julien.grall@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/arm/domain.c    | 73 +++++++++++++++++++++++++++++++++---------------
 xen/arch/arm/vgic.c      | 14 ----------
 xen/arch/x86/domain.c    |  6 ++++
 xen/common/domain.c      |  3 ++
 xen/include/xen/domain.h |  2 ++
 5 files changed, 61 insertions(+), 37 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 0cf147c..63c34fd 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -527,37 +527,16 @@ void vcpu_destroy(struct vcpu *v)
     free_xenheap_pages(v->arch.stack, STACK_ORDER);
 }
 
-int arch_domain_create(struct domain *d, unsigned int domcr_flags,
-                       struct xen_arch_domainconfig *config)
+int arch_domain_preinit(struct domain *d,
+                        struct xen_arch_domainconfig *config)
 {
     int rc;
 
-    d->arch.relmem = RELMEM_not_started;
-
     /* Idle domains do not need this setup */
     if ( is_idle_domain(d) )
         return 0;
 
     ASSERT(config != NULL);
-    if ( (rc = p2m_init(d)) != 0 )
-        goto fail;
-
-    rc = -ENOMEM;
-    if ( (d->shared_info = alloc_xenheap_pages(0, 0)) == NULL )
-        goto fail;
-
-    /* Default the virtual ID to match the physical */
-    d->arch.vpidr = boot_cpu_data.midr.bits;
-
-    clear_page(d->shared_info);
-    share_xen_page_with_guest(
-        virt_to_page(d->shared_info), d, XENSHARE_writable);
-
-    if ( (rc = domain_io_init(d)) != 0 )
-        goto fail;
-
-    if ( (rc = p2m_alloc_table(d)) != 0 )
-        goto fail;
 
     switch ( config->gic_version )
     {
@@ -567,12 +546,16 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
         case GIC_V2:
             config->gic_version = XEN_DOMCTL_CONFIG_GIC_V2;
             d->arch.vgic.version = GIC_V2;
+            d->arch.vgic.handler = &vgic_v2_ops;
             break;
 
+#ifdef CONFIG_ARM_64
         case GIC_V3:
             config->gic_version = XEN_DOMCTL_CONFIG_GIC_V3;
             d->arch.vgic.version = GIC_V3;
+            d->arch.vgic.handler = &vgic_v3_ops;
             break;
+#endif
 
         default:
             BUG();
@@ -581,17 +564,61 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
 
     case XEN_DOMCTL_CONFIG_GIC_V2:
         d->arch.vgic.version = GIC_V2;
+        d->arch.vgic.handler = &vgic_v2_ops;
         break;
 
+#ifdef CONFIG_ARM_64
     case XEN_DOMCTL_CONFIG_GIC_V3:
         d->arch.vgic.version = GIC_V3;
+        d->arch.vgic.handler = &vgic_v3_ops;
         break;
+#endif
 
     default:
         rc = -EOPNOTSUPP;
         goto fail;
     }
 
+    return 0;
+
+fail:
+    d->is_dying = DOMDYING_dead;
+
+    return rc;
+}
+
+int arch_domain_create(struct domain *d, unsigned int domcr_flags,
+                       struct xen_arch_domainconfig *config)
+{
+    int rc;
+
+    d->arch.relmem = RELMEM_not_started;
+
+    /* Idle domains do not need this setup */
+    if ( is_idle_domain(d) )
+        return 0;
+
+    ASSERT(config != NULL);
+    if ( (rc = p2m_init(d)) != 0 )
+        goto fail;
+
+    rc = -ENOMEM;
+    if ( (d->shared_info = alloc_xenheap_pages(0, 0)) == NULL )
+        goto fail;
+
+    /* Default the virtual ID to match the physical */
+    d->arch.vpidr = boot_cpu_data.midr.bits;
+
+    clear_page(d->shared_info);
+    share_xen_page_with_guest(
+        virt_to_page(d->shared_info), d, XENSHARE_writable);
+
+    if ( (rc = domain_io_init(d)) != 0 )
+        goto fail;
+
+    if ( (rc = p2m_alloc_table(d)) != 0 )
+        goto fail;
+
     if ( (rc = domain_vgic_init(d, config->nr_spis)) != 0 )
         goto fail;
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 1bd86f8..08b487b 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -88,20 +88,6 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
         return -ENODEV;
     }
 
-    switch ( d->arch.vgic.version )
-    {
-#ifdef CONFIG_ARM_64
-    case GIC_V3:
-        d->arch.vgic.handler = &vgic_v3_ops;
-        break;
-#endif
-    case GIC_V2:
-        d->arch.vgic.handler = &vgic_v2_ops;
-        break;
-    default:
-        return -ENODEV;
-    }
-
     spin_lock_init(&d->arch.vgic.lock);
 
     d->arch.vgic.shared_irqs =
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 1f1550e..82c4368 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -507,6 +507,12 @@ void vcpu_destroy(struct vcpu *v)
         xfree(v->arch.pv_vcpu.trap_ctxt);
 }
 
+int arch_domain_preinit(struct domain *d,
+                        struct xen_arch_domainconfig *config)
+{
+    return 0;
+}
+
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config)
 {
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 6803c4d..965a214 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -316,6 +316,9 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
     if ( domcr_flags & DOMCRF_dummy )
         return d;
 
+    if ( (err = arch_domain_preinit(d, config)) != 0 )
+        goto fail;
+
     if ( !is_idle_domain(d) )
     {
         if ( (err = xsm_domain_create(XSM_HOOK, d, ssidref)) != 0 )
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 848db8a..7a3533e 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -56,6 +56,8 @@ void vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+int arch_domain_preinit(struct domain *d,
+                        struct xen_arch_domainconfig *config);
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config);
 
-- 
2.1.4

* [PATCH V6 09/10] xen/arm: make domain_max_vcpus return value from vgic_ops
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
                   ` (7 preceding siblings ...)
  2015-06-01 12:56 ` [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 16:26   ` Ian Campbell
  2015-06-01 12:56 ` [PATCH V6 10/10] xen/arm64: increase MAX_VIRT_CPUS to 128 on arm64 Chen Baozi
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

When a guest uses vGICv2, the maximum number of vCPUs it can support
cannot be as many as MAX_VIRT_CPUS, which will be more than 8 when
GICv3 is used on arm64. So domain_max_vcpus should return a value
according to the vGIC the domain uses.
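
With the new definition (see the hunks below), the result is for
example:

    vGICv2 guest:          min(MAX_VIRT_CPUS, 8)    == 8
    vGICv3 guest on arm64: min(MAX_VIRT_CPUS, 4096) == 128, once the
                           last patch raises MAX_VIRT_CPUS to 128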

We didn't keep the old static inline form because it would break
compilation when accessing a member of struct domain:

In file included from xen/include/xen/domain.h:6:0,
                 from xen/include/xen/sched.h:10,
                 from arm64/asm-offsets.c:10:
xen/include/asm/domain.h: In function ‘domain_max_vcpus’:
xen/include/asm/domain.h:266:10: error: dereferencing pointer to incomplete type
     if (d->arch.vgic.version == GIC_V2)
          ^

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/domain.c        | 6 ++++++
 xen/arch/arm/vgic-v2.c       | 3 +++
 xen/arch/arm/vgic-v3.c       | 7 +++++++
 xen/include/asm-arm/domain.h | 5 +----
 xen/include/asm-arm/vgic.h   | 2 ++
 5 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 63c34fd..1992717 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -917,6 +917,12 @@ void vcpu_block_unless_event_pending(struct vcpu *v)
         vcpu_unblock(current);
 }
 
+unsigned int domain_max_vcpus(const struct domain *d)
+{
+    return min_t(unsigned int, MAX_VIRT_CPUS,
+                 d->arch.vgic.handler->max_vcpus);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 17a3c9f..09e6b5a 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -33,6 +33,8 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 
+#define GICV2_MAX_CPUS  8
+
 static inline void gicv2_sgir_to_cpumask(cpumask_t *cpumask,
                                          const register_t sgir)
 {
@@ -594,6 +596,7 @@ const struct vgic_ops vgic_v2_ops = {
     .domain_init = vgic_v2_domain_init,
     .get_irq_priority = vgic_v2_get_irq_priority,
     .get_target_vcpu = vgic_v2_get_target_vcpu,
+    .max_vcpus = GICV2_MAX_CPUS,
 };
 
 /*
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index f2b78a4..50dcfc9 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -32,6 +32,12 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 
+/*
+ * We will use both AFF1 and AFF0 in (v)MPIDR. Thus, the maximum number
+ * of CPUs that can be supported is 4096 (256*16) in theory.
+ */
+#define GICV3_MAX_CPUS  4096
+
 /* GICD_PIDRn register values for ARM implementations */
 #define GICV3_GICD_PIDR0  0x92
 #define GICV3_GICD_PIDR1  0xb4
@@ -1234,6 +1240,7 @@ const struct vgic_ops vgic_v3_ops = {
     .get_irq_priority = vgic_v3_get_irq_priority,
     .get_target_vcpu  = vgic_v3_get_target_vcpu,
     .emulate_sysreg  = vgic_v3_emulate_sysreg,
+    .max_vcpus = GICV3_MAX_CPUS,
 };
 
 /*
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index b7b5cd2..b525bec 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -261,10 +261,7 @@ struct arch_vcpu
 void vcpu_show_execution_state(struct vcpu *);
 void vcpu_show_registers(const struct vcpu *);
 
-static inline unsigned int domain_max_vcpus(const struct domain *d)
-{
-    return MAX_VIRT_CPUS;
-}
+unsigned int domain_max_vcpus(const struct domain *);
 
 /*
  * Due to the restriction of GICv3, the number of vCPUs in AFF0 is
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 2f413e1..60c6cfd 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -110,6 +110,8 @@ struct vgic_ops {
     struct vcpu *(*get_target_vcpu)(struct vcpu *v, unsigned int irq);
     /* vGIC sysreg emulation */
     int (*emulate_sysreg)(struct cpu_user_regs *regs, union hsr hsr);
+    /* Maximum number of vCPUs supported */
+    const unsigned int max_vcpus;
 };
 
 /* Number of ranks of interrupt registers for a domain */
-- 
2.1.4



* [PATCH V6 10/10] xen/arm64: increase MAX_VIRT_CPUS to 128 on arm64
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
                   ` (8 preceding siblings ...)
  2015-06-01 12:56 ` [PATCH V6 09/10] xen/arm: make domain_max_vcpus return value from vgic_ops Chen Baozi
@ 2015-06-01 12:56 ` Chen Baozi
  2015-06-05 16:27   ` Ian Campbell
  2015-06-05 14:08 ` [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Ian Campbell
  2015-06-05 14:23 ` Ian Campbell
  11 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-01 12:56 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Chen Baozi, Ian Campbell

From: Chen Baozi <baozich@gmail.com>

After we have increased the size of the GICR region in the guest
address space and made use of both AFF0 and AFF1 in the (v)MPIDR, we
are now able to support up to 4096 vCPUs in theory. However, that
would cost 512MB of address space for the GICR region, which is
unnecessarily big at the moment. Considering the maximum number of
CPUs that GIC-500 can support and the old value of MAX_VIRT_CPUS
before commit aa25a61, we increase its value to 128.
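
For comparison, with the guest redistributor stride of 0x20000 bytes:

    4096 vCPUs * 0x20000 = 0x20000000 (512MB) of GICR address space
     128 vCPUs * 0x20000 = 0x01000000 (16MB), matching patch 1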

Since domain_max_vcpus has been changed to depend on vgic_ops, we
could have done more work in order to drop the definition of
MAX_VIRT_CPUS. However, because it is still used for some conditional
compilation in common code, I think that would be better done in a
separate cleanup patch series.

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/vgic-v3.c       | 1 -
 xen/include/asm-arm/config.h | 4 ++++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 50dcfc9..2be9f81 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -895,7 +895,6 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
         rank = vgic_rank_offset(v, 64, gicd_reg - GICD_IROUTER,
                                 DABT_DOUBLE_WORD);
         if ( rank == NULL ) goto write_ignore;
-        BUG_ON(v->domain->max_vcpus > 8);
         new_irouter = *r;
         vgic_lock_rank(v, rank, flags);
 
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 3b23e05..817c216 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -47,7 +47,11 @@
 #define NR_CPUS 128
 #endif
 
+#ifdef CONFIG_ARM_64
+#define MAX_VIRT_CPUS 128
+#else
 #define MAX_VIRT_CPUS 8
+#endif
 
 #define asmlinkage /* Nothing needed */
 
-- 
2.1.4

* Re: [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
                   ` (9 preceding siblings ...)
  2015-06-01 12:56 ` [PATCH V6 10/10] xen/arm64: increase MAX_VIRT_CPUS to 128 on arm64 Chen Baozi
@ 2015-06-05 14:08 ` Ian Campbell
  2015-06-05 14:37   ` Julien Grall
  2015-06-05 14:23 ` Ian Campbell
  11 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 14:08 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> Currently the number of vcpus on arm64 with GICv3 is limited up to 8 due
> to the fixed size of redistributor mmio region.

Are we talking about only guests here or are hosts impacted too somehow?

>  Increasing the size
> makes the number expand to 16 because of AFF0 restriction on GICv3.

Can you give a reference for this please? "AFF0" doesn't appear anywhere
in my gic v3 spec, and "4.2.2 Interrupt Routing in GICv3 Systems"
implies 256 processors at each affinity level (which is what I
expected).

> To create a guest up to 128 vCPUs, which is the maxium number that GIC-500
> can support, this patchset uses the AFF1 information to create a mapping
> relation between vCPUID and vMPIDR and deals with the related issues.
> 
> These patches are written based upon Julien's "GICv2 on GICv3" series
> and the IROUTER emulation cleanup patch.
> 
> Changes form V5:
> * Rework gicv3_sgir_to_cpumask in #5
> * Rework #8 to split arch_domain_create into two parts:
>   - arch_domain_preinit to initialise vgic_ops before evtchn_init is
>     called
>   - the rest of logic remains in arch_domain_create
> * Use a field value in struct vgic_ops instead of the function point
>   for max_vcpus.
> * Minor changes according to previous reviews.
> 
> Changes from V4:
> * Split the patch 4/8 of V3 into two part:
>   - Use cpumask_t type for vcpu_mask in vgic_to_sgi.
>   - Use AFF1 when translating ICC_SGI1R_EL1 to cpumask.
> * Use a more efficient algorithm when calculate cpumask.
> * Add a patch to call arch_domain_create before evtchn_init, because
>   evtchn_init needs vgic info which is initialised during
>   acrh_domain_create.
> * Get the max vcpu info from vgic_ops.
> * Minor changes according to previous reviews.
> 
> Changes from V3:
> * Drop the wrong patch that altering domain_max_vcpus to a macro.
> * Change the domain_max_vcpus to return value accodring to the version
>   of the vGIC in used.
> 
> Changes from V2:
> * Reorder the patch which increases MAX_VIRT_CPUS to the last to make
>   this series bisectable.
> * Drop the dynamic re-distributor region allocation patch in tools.
> * Use cpumask_t type instead of unsigned long in vgic_to_sgi and do the
>   translation from GICD_SGIR to vcpu_mask in both vGICv2 and vGICv3.
> * Make domain_max_vcpus be alias of max_vcpus in struct domain
> 
> Changes from V1:
> * Use the way that expanding the GICR address space to support up to 128
>   redistributor in guest memory layout rather than use the dynamic
>   allocation.
> * Add support to include AFF1 information in vMPIDR/logical CPUID.
> 
> Chen Baozi (10):
>   xen/arm: gic-v3: Increase the size of GICR in address space for guest
>   xen/arm: Add functions of mapping between vCPUID and virtual affinity
>   xen/arm: Use the new functions for vCPUID/vaffinity transformation
>   xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi
>   xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask
>   tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU
>   xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity
>   xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
>   xen/arm: make domain_max_vcpus return value from vgic_ops
>   xen/arm64: increase MAX_VIRT_CPUS to 128 on arm64
> 
>  tools/libxl/libxl_arm.c           | 14 ++++++-
>  xen/arch/arm/domain.c             | 85 ++++++++++++++++++++++++++-------------
>  xen/arch/arm/domain_build.c       | 14 +++++--
>  xen/arch/arm/vgic-v2.c            | 19 +++++++--
>  xen/arch/arm/vgic-v3.c            | 50 ++++++++++++++++++++---
>  xen/arch/arm/vgic.c               | 45 +++++++++------------
>  xen/arch/arm/vpsci.c              |  5 +--
>  xen/arch/x86/domain.c             |  6 +++
>  xen/common/domain.c               |  3 ++
>  xen/include/asm-arm/config.h      |  4 ++
>  xen/include/asm-arm/domain.h      | 42 ++++++++++++++++++-
>  xen/include/asm-arm/gic.h         |  1 +
>  xen/include/asm-arm/gic_v3_defs.h |  4 ++
>  xen/include/asm-arm/vgic.h        |  4 +-
>  xen/include/public/arch-arm.h     |  4 +-
>  xen/include/xen/domain.h          |  2 +
>  16 files changed, 226 insertions(+), 76 deletions(-)
> 

* Re: [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3
  2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
                   ` (10 preceding siblings ...)
  2015-06-05 14:08 ` [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Ian Campbell
@ 2015-06-05 14:23 ` Ian Campbell
  11 siblings, 0 replies; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 14:23 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> To create a guest up to 128 vCPUs, which is the maxium number that
> GIC-500

Is 128 also, as it happens, the limit before cpumask_t needs to be
dynamically allocated? (I've been reading xen/include/xen/cpumask.h)

I'm wondering how much I should concern myself with the uses of cpumasks
on the hot paths I'm seeing in here, e.g. sgi injection...

Ian.

* Re: [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3
  2015-06-05 14:08 ` [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Ian Campbell
@ 2015-06-05 14:37   ` Julien Grall
  2015-06-05 15:15     ` Ian Campbell
  0 siblings, 1 reply; 42+ messages in thread
From: Julien Grall @ 2015-06-05 14:37 UTC (permalink / raw)
  To: Ian Campbell, Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On 05/06/15 15:08, Ian Campbell wrote:
> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
>> From: Chen Baozi <baozich@gmail.com>
>>
>> Currently the number of vcpus on arm64 with GICv3 is limited up to 8 due
>> to the fixed size of redistributor mmio region.
> 
> Are we talking about only guests here or are hosts impacted too somehow?

Guest only. The GICv3 driver should already support 128 CPUs (though
never tested).

>>  Increasing the size
>> makes the number expand to 16 because of AFF0 restriction on GICv3.
> 
> Can you give a reference for this please? "AFF0" doesn't appear anywhere
> in my gic v3 spec, and "4.2.2 Interrupt Routing in GICv3 Systems"
> implies 256 processors at each affinity level (which is what I
> expected).

5.7.29 ICC_SGI0R_EL1, ICC_SGI1R_EL1 and ICC_ASGI1R_EL1:

"Note: this restricts distribution of SGIs to the first 16 processors of
an affinity 1 cluster".

Therefore any CPU using AFF0 >= 16 would never receive SGI.

FWIW, Linux is shouting loud when the AFF0 is too high. Xen does the
same thing in the GICv3 drivers.


Regards,

-- 
Julien Grall

* Re: [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3
  2015-06-05 14:37   ` Julien Grall
@ 2015-06-05 15:15     ` Ian Campbell
  0 siblings, 0 replies; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 15:15 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Chen Baozi, Chen Baozi

On Fri, 2015-06-05 at 15:37 +0100, Julien Grall wrote:
> On 05/06/15 15:08, Ian Campbell wrote:
> > On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> >> From: Chen Baozi <baozich@gmail.com>
> >>
> >> Currently the number of vcpus on arm64 with GICv3 is limited up to 8 due
> >> to the fixed size of redistributor mmio region.
> > 
> > Are we talking about only guests here or are hosts impacted too somehow?
> 
> Guest only. The GICv3 driver should already supports 128 CPUs (though
> never tested).
> 
> >>  Increasing the size
> >> makes the number expand to 16 because of AFF0 restriction on GICv3.
> > 
> > Can you give a reference for this please? "AFF0" doesn't appear anywhere
> > in my gic v3 spec, and "4.2.2 Interrupt Routing in GICv3 Systems"
> > implies 256 processors at each affinity level (which is what I
> > expected).
> 
> 5.7.29 ICC_SGI0R_EL1, ICC_SGI1R_EL1 and ICC_ASGI1R_EL1:
> 
> "Note: this restricts distribution of SGIs to the first 16 processors of
> an affinity 1 cluster".

Thanks, they really buried the lead on that one, the section introducing
affinity hierarchy doesn't even hint at this limit!

> 
> Therefore any CPU using AFF0 >= 16 would never receive SGI.
> 
> FWIW, Linux is shouting loud when the AFF0 is too high. Xen does the
> same thing in the GICv3 drivers.
> 
> 
> Regards,
> 

* Re: [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest
  2015-06-01 12:56 ` [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest Chen Baozi
@ 2015-06-05 15:49   ` Ian Campbell
  2015-06-05 16:04     ` Julien Grall
  0 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 15:49 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> Currently it only supports up to 8 vCPUs. Increase the region to hold
> up to 128 vCPUs, which is the maximum number that GIC-500 supports.
> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>
> Reviewed-by: Julien Grall <julien.grall@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

I did briefly wonder if we should shoot for the stars here and reserve
space for some enormous set of processors, but I suppose there's no
need.

> ---
>  xen/include/public/arch-arm.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index c029e0f..ec0c261 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -388,8 +388,8 @@ struct xen_arch_domainconfig {
>  #define GUEST_GICV3_RDIST_STRIDE   0x20000ULL
>  #define GUEST_GICV3_RDIST_REGIONS  1
>  
> -#define GUEST_GICV3_GICR0_BASE     0x03020000ULL    /* vCPU0 - vCPU7 */
> -#define GUEST_GICV3_GICR0_SIZE     0x00100000ULL
> +#define GUEST_GICV3_GICR0_BASE     0x03020000ULL    /* vCPU0 - vCPU127 */
> +#define GUEST_GICV3_GICR0_SIZE     0x01000000ULL
>  
>  /*
>   * 16MB == 4096 pages reserved for guest to use as a region to map its

* Re: [PATCH V6 02/10] xen/arm: Add functions of mapping between vCPUID and virtual affinity
  2015-06-01 12:56 ` [PATCH V6 02/10] xen/arm: Add functions of mapping between vCPUID and virtual affinity Chen Baozi
@ 2015-06-05 15:54   ` Ian Campbell
  0 siblings, 0 replies; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 15:54 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> GICv3 restricts that the maximum number of CPUs in affinity 0 (one
> cluster) is 16.

Please add the reference to why this is.


>  That is to say the upper 4 bits of affinity 0 is unused.
> Current implementation considers that AFF0 is equal to vCPUID, which
> makes all vCPUs in one cluster, limiting its number to 16. If we would
> like to support more than 16 number of vCPU in one guest, we need to
> make use of AFF1. Considering the unused upper 4 bits, we need to create
> a pair of functions mapping the vCPUID and virtual affinity.
> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>
> Reviewed-by: Julien Grall <julien.grall@citrix.com>
> ---
>  xen/include/asm-arm/domain.h | 41 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 41 insertions(+)
> 
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 75b17af..b7b5cd2 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -266,6 +266,47 @@ static inline unsigned int domain_max_vcpus(const struct domain *d)
>      return MAX_VIRT_CPUS;
>  }
>  
> +/*
> + * Due to the restriction of GICv3, the number of vCPUs in AFF0 is
> + * limited to 16,

Actually, maybe this is a better place to say why this is than the
commit message (or do both).

>                    thus only the first 4 bits of AFF0 are legal. We will
> + * use the first 2 affinity levels here, expanding the number of vCPU up
> + * to 4096 (16*256), which is more than 128 PEs that GIC-500 supports.
                                           ^the

(although I'm not sure a reference to GIC-500 here is all that useful)

> + * Since we don't save information of vCPU's topology (affinity) in
> + * vMPIDR at the moment, we map the vcpuid to the vMPIDR linearly.
> + *
> + * XXX: We may have multi-threading or virtual cluster information in
> + * the future.

We may, but I don't think that's worth mentioning here and now.

The code itself looks good, thanks.

> + */
> +static inline unsigned int vaffinity_to_vcpuid(register_t vaff)
> +{
> +    unsigned int vcpuid;
> +
> +    vaff &= MPIDR_HWID_MASK;
> +
> +    vcpuid = MPIDR_AFFINITY_LEVEL(vaff, 0);
> +    vcpuid |= MPIDR_AFFINITY_LEVEL(vaff, 1) << 4;
> +
> +    return vcpuid;
> +}
> +
> +static inline register_t vcpuid_to_vaffinity(unsigned int vcpuid)
> +{
> +    register_t vaff;
> +
> +    /*
> +     * Right now only AFF0 and AFF1 are supported in virtual affinity.
> +     * Since only the first 4 bits in AFF0 are used in GICv3, the
> +     * available bits are 12 (4+8).
> +     */
> +    BUILD_BUG_ON(!(MAX_VIRT_CPUS < ((1 << 12))));
> +
> +    vaff = (vcpuid & 0x0f) << MPIDR_LEVEL_SHIFT(0);
> +    vaff |= ((vcpuid >> 4) & MPIDR_LEVEL_MASK) << MPIDR_LEVEL_SHIFT(1);
> +
> +    return vaff;
> +}
> +
>  #endif /* __ASM_DOMAIN_H__ */
>  
>  /*
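
As a quick sanity check of the mapping above (a sketch only, reusing the
helpers from this hunk): a flat vCPU ID of 20 has AFF0 = 4 and AFF1 = 1, so

    vcpuid_to_vaffinity(20);     /* == (1 << 8) | 4 == 0x104 */
    vaffinity_to_vcpuid(0x104);  /* == 20 */

i.e. the two functions round-trip and AFF0 never exceeds 15.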

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 03/10] xen/arm: Use the new functions for vCPUID/vaffinity transformation
  2015-06-01 12:56 ` [PATCH V6 03/10] xen/arm: Use the new functions for vCPUID/vaffinity transformation Chen Baozi
@ 2015-06-05 15:56   ` Ian Campbell
  2015-06-05 18:18     ` Julien Grall
  0 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 15:56 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> There are 3 places to change:
> 
> * Initialise vMPIDR value in vcpu_initialise()
> * Find the vCPU from vMPIDR affinity information when accessing GICD
>   registers in vGIC
> * Find the vCPU from vMPIDR affinity information when booting with vPSCI
>   in vGIC
>   - Also make the code for PSCI 0.1 use MPIDR-like value as the cpuid.

Does this "- Also ..." not need to be done at the same time as the
change to how we describe things in the FDT? Since that is where the
guest gets the parameter from, isn't it?

> Signed-off-by: Chen Baozi <baozich@gmail.com>
> Reviewed-by: Julien Grall <julien.grall@citrix.com>
> ---
>  xen/arch/arm/domain.c  | 6 +-----
>  xen/arch/arm/vgic-v3.c | 2 +-
>  xen/arch/arm/vpsci.c   | 5 +----
>  3 files changed, 3 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 2bde26e..0cf147c 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -501,11 +501,7 @@ int vcpu_initialise(struct vcpu *v)
>  
>      v->arch.sctlr = SCTLR_GUEST_INIT;
>  
> -    /*
> -     * By default exposes an SMP system with AFF0 set to the VCPU ID
> -     * TODO: Handle multi-threading processor and cluster
> -     */
> -    v->arch.vmpidr = MPIDR_SMP | (v->vcpu_id << MPIDR_AFF0_SHIFT);
> +    v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>  
>      v->arch.actlr = READ_SYSREG32(ACTLR_EL1);
>  
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 540f85f..ef9a71a 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -61,7 +61,7 @@ static struct vcpu *vgic_v3_irouter_to_vcpu(struct domain *d, uint64_t irouter)
>      if ( irouter & GICD_IROUTER_SPI_MODE_ANY )
>          return d->vcpu[0];
>  
> -    vcpu_id = irouter & MPIDR_AFF0_MASK;
> +    vcpu_id = vaffinity_to_vcpuid(irouter);
>      if ( vcpu_id >= d->max_vcpus )
>          return NULL;
>  
> diff --git a/xen/arch/arm/vpsci.c b/xen/arch/arm/vpsci.c
> index 5d899be..aebe1e2 100644
> --- a/xen/arch/arm/vpsci.c
> +++ b/xen/arch/arm/vpsci.c
> @@ -32,10 +32,7 @@ static int do_common_cpu_on(register_t target_cpu, register_t entry_point,
>      int is_thumb = entry_point & 1;
>      register_t vcpuid;
>  
> -    if( ver == XEN_PSCI_V_0_2 )
> -        vcpuid = (target_cpu & MPIDR_HWID_MASK);
> -    else
> -        vcpuid = target_cpu;
> +    vcpuid = vaffinity_to_vcpuid(target_cpu);
>  
>      if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
>          return PSCI_INVALID_PARAMETERS;

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest
  2015-06-05 15:49   ` Ian Campbell
@ 2015-06-05 16:04     ` Julien Grall
  2015-06-05 16:31       ` Ian Campbell
  0 siblings, 1 reply; 42+ messages in thread
From: Julien Grall @ 2015-06-05 16:04 UTC (permalink / raw)
  To: Ian Campbell, Chen Baozi; +Cc: xen-devel, Chen Baozi

On 05/06/15 16:49, Ian Campbell wrote:
> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
>> From: Chen Baozi <baozich@gmail.com>
>>
>> Currently it only supports up to 8 vCPUs. Increase the region to hold
>> up to 128 vCPUs, which is the maximum number that GIC-500 supports.
>>
>> Signed-off-by: Chen Baozi <baozich@gmail.com>
>> Reviewed-by: Julien Grall <julien.grall@citrix.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> I did briefly wonder if we should shoot for the stars here and reserve
> space for some enormous set of processors, but I suppose there's no
> need.

I thought about the same thing. AFF0 + AFF1 gives 4096 CPUs.

Although as we will support only 128 vCPUs (see the last patch),
reserving more space is not necessary. This space saved can be used for
a bigger PCI MMIO region later.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 04/10] xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi
  2015-06-01 12:56 ` [PATCH V6 04/10] xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi Chen Baozi
@ 2015-06-05 16:05   ` Ian Campbell
  2015-06-10 10:21     ` Chen Baozi
  0 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 16:05 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> Use cpumask_t instead of unsigned long which can only express 64 cpus at
> the most. Add the {gicv2|gicv3}_sgir_to_cpumask in corresponding vGICs
> to translate GICD_SGIR/ICC_SGI1R_EL1 to vcpu_mask for vgic_to_sgi.
> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>
> ---
>  xen/arch/arm/vgic-v2.c            | 16 +++++++++++++---
>  xen/arch/arm/vgic-v3.c            | 18 ++++++++++++++----
>  xen/arch/arm/vgic.c               | 31 ++++++++++++++++++++-----------
>  xen/include/asm-arm/gic.h         |  1 +
>  xen/include/asm-arm/gic_v3_defs.h |  2 ++
>  xen/include/asm-arm/vgic.h        |  2 +-
>  6 files changed, 51 insertions(+), 19 deletions(-)
> 
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index 3be1a51..17a3c9f 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -33,6 +33,15 @@
>  #include <asm/gic.h>
>  #include <asm/vgic.h>
>  
> +static inline void gicv2_sgir_to_cpumask(cpumask_t *cpumask,
> +                                         const register_t sgir)
> +{
> +    unsigned long target_list;
> +
> +    target_list = ((sgir & GICD_SGI_TARGET_MASK) >> GICD_SGI_TARGET_SHIFT);
> +    bitmap_copy(cpumask_bits(cpumask), &target_list, GICD_SGI_TARGET_BITS);
> +}
> +
>  static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
>  {
>      struct hsr_dabt dabt = info->dabt;
> @@ -201,16 +210,17 @@ static int vgic_v2_to_sgi(struct vcpu *v, register_t sgir)
>      int virq;
>      int irqmode;
>      enum gic_sgi_mode sgi_mode;
> -    unsigned long vcpu_mask = 0;
> +    cpumask_t vcpu_mask;

How big is NR_CPUS (and hence this variable) going to be at the end of
this series? I can't seem to find the patch which bumps it, so I suppose
it remains 128?

Perhaps we want to bite the bullet now and change the vgic_to_sgi to
take an affinity path thing (aff3.aff2.aff1) + target list, instead of a
cpumask? That makes sense given the 16 CPU per AFF0 limitation, since
there is only a limited set of cpumask patterns which can be specified,
so we don't need the fully arbitrary bitmap.
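
Something along these lines, perhaps (a rough sketch only; the aff_path and
target_list parameter names are invented here):

    int vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
                    int virq, uint32_t aff_path /* aff3.aff2.aff1 */,
                    uint16_t target_list);

i.e. keep the 16-bit target list as the hardware presents it and pass the
cluster alongside, instead of expanding everything into a cpumask.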

> +    cpumask_clear(&vcpu_mask);
>      irqmode = (sgir & GICD_SGI_TARGET_LIST_MASK) >> GICD_SGI_TARGET_LIST_SHIFT;
>      virq = (sgir & GICD_SGI_INTID_MASK);
> -    vcpu_mask = (sgir & GICD_SGI_TARGET_MASK) >> GICD_SGI_TARGET_SHIFT;
>  
>      /* Map GIC sgi value to enum value */
>      switch ( irqmode )
>      {
>      case GICD_SGI_TARGET_LIST_VAL:
> +        gicv2_sgir_to_cpumask(&vcpu_mask, sgir);
>          sgi_mode = SGI_TARGET_LIST;
>          break;
>      case GICD_SGI_TARGET_OTHERS_VAL:
> @@ -226,7 +236,7 @@ static int vgic_v2_to_sgi(struct vcpu *v, register_t sgir)
>          return 0;
>      }
>  
> -    return vgic_to_sgi(v, sgir, sgi_mode, virq, vcpu_mask);
> +    return vgic_to_sgi(v, sgir, sgi_mode, virq, &vcpu_mask);
>  }
>  
>  static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index ef9a71a..2bf5294 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -972,22 +972,32 @@ write_ignore:
>      return 1;
>  }
>  
> +static inline void gicv3_sgir_to_cpumask(cpumask_t *cpumask,
> +                                         const register_t sgir)
> +{
> +    unsigned long target_list;
> +
> +    target_list = sgir & ICH_SGI_TARGETLIST_MASK;
> +    bitmap_copy(cpumask_bits(cpumask), &target_list, ICH_SGI_TARGET_BITS);

Do you not need to left shift the target_list bits into the cpumask
based on AFF1+?

Otherwise aren't you delivering an SGI targeted at, say, 0.0.1.0xab to
0.0.0.0xab?

> +}
> +
>  static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
>  {
>      int virq;
>      int irqmode;
>      enum gic_sgi_mode sgi_mode;
> -    unsigned long vcpu_mask = 0;
> +    cpumask_t vcpu_mask;
>  
> +    cpumask_clear(&vcpu_mask);
>      irqmode = (sgir >> ICH_SGI_IRQMODE_SHIFT) & ICH_SGI_IRQMODE_MASK;
>      virq = (sgir >> ICH_SGI_IRQ_SHIFT ) & ICH_SGI_IRQ_MASK;
> -    /* SGI's are injected at Rdist level 0. ignoring affinity 1, 2, 3 */
> -    vcpu_mask = sgir & ICH_SGI_TARGETLIST_MASK;
>  
>      /* Map GIC sgi value to enum value */
>      switch ( irqmode )
>      {
>      case ICH_SGI_TARGET_LIST:
> +        /* SGI's are injected at Rdist level 0. ignoring affinity 1, 2, 3 */
> +        gicv3_sgir_to_cpumask(&vcpu_mask, sgir);
>          sgi_mode = SGI_TARGET_LIST;
>          break;
>      case ICH_SGI_TARGET_OTHERS:
> @@ -998,7 +1008,7 @@ static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
>          return 0;
>      }
>  
> -    return vgic_to_sgi(v, sgir, sgi_mode, virq, vcpu_mask);
> +    return vgic_to_sgi(v, sgir, sgi_mode, virq, &vcpu_mask);
>  }
>  
>  static int vgic_v3_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 7b387b7..1bd86f8 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -318,15 +318,20 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>      }
>  }
>  
> -/* TODO: unsigned long is used to fit vcpu_mask.*/
>  int vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode, int virq,
> -                unsigned long vcpu_mask)
> +                cpumask_t *vcpu_mask)
>  {
>      struct domain *d = v->domain;
>      int vcpuid;
>      int i;
>  
> -    ASSERT(d->max_vcpus < 8*sizeof(vcpu_mask));
> +    /*
> +     * cpumask_t is based on NR_CPUS and there is no relation between
> +     * NR_CPUS and MAX_VIRT_CPUS. Furthermore, NR_CPUS can be configured
> +     * at build time by the user. So we add a BUILD_BUG_ON here in order
> +     * to avoid insecure hypervisor.

Insecure?

> +     */
> +    BUILD_BUG_ON(sizeof(cpumask_t)*8 < MAX_VIRT_CPUS);

Is this the same as (NR_CPUS < MAX_VIRT_CPUS)?

Ian

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 05/10] xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask
  2015-06-01 12:56 ` [PATCH V6 05/10] xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask Chen Baozi
@ 2015-06-05 16:09   ` Ian Campbell
  2015-06-05 18:25     ` Julien Grall
  0 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 16:09 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> To support more than 16 vCPUs, we have to calculate cpumask with AFF1
> field value in ICC_SGI1R_EL1.
> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>
> ---
>  xen/arch/arm/vgic-v3.c            | 30 ++++++++++++++++++++++++++----
>  xen/include/asm-arm/gic_v3_defs.h |  2 ++
>  2 files changed, 28 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 2bf5294..f2b78a4 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -972,13 +972,28 @@ write_ignore:
>      return 1;
>  }
>  
> -static inline void gicv3_sgir_to_cpumask(cpumask_t *cpumask,
> +static inline int gicv3_sgir_to_cpumask(cpumask_t *cpumask,
>                                           const register_t sgir)
>  {
>      unsigned long target_list;
> +    uint16_t *target_bitmap;
> +    unsigned int aff1;
>  
>      target_list = sgir & ICH_SGI_TARGETLIST_MASK;
> -    bitmap_copy(cpumask_bits(cpumask), &target_list, ICH_SGI_TARGET_BITS);
> +    /* We assume that only AFF1 is used in ICC_SGI1R_EL1. */
> +    aff1 = (sgir >> ICH_SGI_AFFINITY_LEVEL(1)) & ICH_SGI_AFFx_MASK;
> +
> +    /* There might be up to 4096 vCPUs with all bits in affinity 1
> +     * are used, so we have to check whether it will overflow the
> +     * bitmap array of cpumask_t.
> +     */
> +    if ( ((aff1 + 1) * ICH_SGI_TARGET_BITS) > NR_CPUS )
> +        return 1;
> +
> +    target_bitmap = (uint16_t *)cpumask_bits(cpumask);
> +    target_bitmap[aff1] = target_list;

I think this is another argument for passing the cluster and target list
separately at the affinity level.

> +
> +    return 0;
>  }
>  
>  static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
> @@ -996,8 +1011,15 @@ static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
>      switch ( irqmode )
>      {
>      case ICH_SGI_TARGET_LIST:
> -        /* SGI's are injected at Rdist level 0. ignoring affinity 1, 2, 3 */
> -        gicv3_sgir_to_cpumask(&vcpu_mask, sgir);
> +        /*
> +         * Currenty we assume only affinity level-1 is used in SGI's

"Currently"

> +         * injection, ignoring level 2 & 3.
> +         */
> +        if ( gicv3_sgir_to_cpumask(&vcpu_mask, sgir) )
> +        {
> +            gprintk(XENLOG_WARNING, "Wrong affinity in SGI1R_EL register\n");

I don't think we need to log this. The guest has asked to send an SGI to
a VCPU which we know can't possibly exist. I'm not sure what real h/w
would do, but if it is e.g. UNPREDICTABLE then we should consider
killing the guest here. I suspect it's actually just ignored, in which
case we can silently do the same.

Ian.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 06/10] tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU
  2015-06-01 12:56 ` [PATCH V6 06/10] tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU Chen Baozi
@ 2015-06-05 16:11   ` Ian Campbell
  2015-06-05 16:12   ` Ian Campbell
  1 sibling, 0 replies; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 16:11 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> According to ARM CPUs bindings, the reg field should match the MPIDR's
> affinity bits. We will use AFF0 and AFF1 when constructing the reg value
> of the guest at the moment, for it is enough for the current max vcpu
> number.
> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>
> Reviewed-by: Julien Grall <julien.grall@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 06/10] tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU
  2015-06-01 12:56 ` [PATCH V6 06/10] tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU Chen Baozi
  2015-06-05 16:11   ` Ian Campbell
@ 2015-06-05 16:12   ` Ian Campbell
  1 sibling, 0 replies; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 16:12 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> According to ARM CPUs bindings, the reg field should match the MPIDR's
> affinity bits. We will use AFF0 and AFF1 when constructing the reg value
> of the guest at the moment, for it is enough for the current max vcpu
> number.
> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>
> Reviewed-by: Julien Grall <julien.grall@citrix.com>

Actually, please ignore previous ack.
[...]
> +    uint64_t mpidr_aff;
[...]
> +        name = GCSPRINTF("cpu@%lx", mpidr_aff);

The correct format specifier for a uint64_t is "%"PRIx64", otherwise you
will break the 32-bit build.
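
i.e. presumably something like (a sketch of the fix only, assuming inttypes.h
is already pulled in there):

    name = GCSPRINTF("cpu@%"PRIx64, mpidr_aff);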

With that changed you can put the ack back...

Ian.

>  
>          res = fdt_begin_node(fdt, name);
>          if (res) return res;
> @@ -297,7 +307,7 @@ static int make_cpus_node(libxl__gc *gc, void *fdt, int nr_cpus,
>          res = fdt_property_string(fdt, "enable-method", "psci");
>          if (res) return res;
>  
> -        res = fdt_property_regs(gc, fdt, 1, 0, 1, (uint64_t)i);
> +        res = fdt_property_regs(gc, fdt, 1, 0, 1, mpidr_aff);
>          if (res) return res;
>  
>          res = fdt_end_node(fdt);

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 07/10] xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity
  2015-06-01 12:56 ` [PATCH V6 07/10] xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity Chen Baozi
@ 2015-06-05 16:13   ` Ian Campbell
  0 siblings, 0 replies; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 16:13 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> +        mpidr_aff = vcpuid_to_vaffinity(cpu);
> +        DPRINT("Create cpu@%lx (logical CPUID: %d) node\n", mpidr_aff, cpu);

"PRIx64" again please. I think the hex vs. decimal here is to be
expected and ok by the way.

With that fixed: Acked-by: Ian Campbell <ian.campbell@citrix.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
  2015-06-01 12:56 ` [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init Chen Baozi
@ 2015-06-05 16:22   ` Ian Campbell
  2015-06-11  9:20     ` Chen Baozi
  0 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 16:22 UTC (permalink / raw)
  To: Chen Baozi
  Cc: Keir Fraser, Andrew Cooper, Tim Deegan, Julien Grall,
	Stefano Stabellini, Jan Beulich, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> evtchn_init will call domain_max_vcpus to allocate poll_mask. On
> arm/arm64 platform, this number is determined by the vGIC the guest
> is going to use, which won't be initialised until arch_domain_create
> is called in current implementation. However, moving arch_domain_create
> means that we will allocate memory before checking the XSM policy,
> which seems not to be acceptable because if the domain is not allowed
> to boot by XSM policy the expensive execution of arch_domain_create
> is wasteful. Thus, we create the arch_domain_preinit to make vgic_ops
> initialisation be done earlier.

I don't have a fundamental objection to this refactoring, but I'm
curious under what circumstances something would belong in preinit
rather than create, i.e. what is the expected semantics of this new hook
vs the old one.

If you could arrange that only preinit needed the config pointer, that
might provide a suitably narrow definition. AFAIK that would just mean
moving the setting of arch.vgic.nr_spis.
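
Something like the following split, perhaps (a rough sketch; the exact
prototypes are a guess):

    /* Needs the config: picks the vGIC version and nr_spis, so that
     * domain_max_vcpus() is meaningful before evtchn_init() runs. */
    int arch_domain_preinit(struct domain *d,
                            struct xen_arch_domainconfig *config);

    /* The rest of the existing logic, no longer needing the config. */
    int arch_domain_create(struct domain *d);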

I also don't have a fundamental objection to receiving a 16-byte poll
mask on GICv2 systems; x86 copes with a similarly sized one even on small
systems.

Ian.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 09/10] xen/arm: make domain_max_vcpus return value from vgic_ops
  2015-06-01 12:56 ` [PATCH V6 09/10] xen/arm: make domain_max_vcpus return value from vgic_ops Chen Baozi
@ 2015-06-05 16:26   ` Ian Campbell
  2015-06-05 16:39     ` Julien Grall
  0 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 16:26 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> [...] 
> +#define GICV2_MAX_CPUS  8

This and GICV3_MAX_CPUS don't seem very worthwhile, unless there are to
be other uses of them.

In fact, GICV3_MAX_CPUS is really MAX_VIRT_CPUS, through its
association with the affinity mapping, i.e. if one changes so would the
other, in lockstep. So I think you should just use that for v3 and
hardcode 8 inline for v2 (since it cannot change).
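
In other words (a sketch only; the ops structure names are illustrative,
the max_vcpus field is the one introduced later in this series):

    /* vgic-v2.c */
    static const struct vgic_ops vgic_v2_ops = {
        /* ... existing handlers ... */
        .max_vcpus = 8,             /* fixed by the GICv2 architecture */
    };

    /* vgic-v3.c */
    static const struct vgic_ops vgic_v3_ops = {
        /* ... existing handlers ... */
        .max_vcpus = MAX_VIRT_CPUS, /* tied to the affinity mapping */
    };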

Ian.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 10/10] xen/arm64: increase MAX_VIRT_CPUS to 128 on arm64
  2015-06-01 12:56 ` [PATCH V6 10/10] xen/arm64: increase MAX_VIRT_CPUS to 128 on arm64 Chen Baozi
@ 2015-06-05 16:27   ` Ian Campbell
  0 siblings, 0 replies; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 16:27 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@gmail.com>
> 
> After we have increased the size of GICR in address space for guest
> and made use of both AFF0 and AFF1 in (v)MPIDR, we are now able to
> support up to 4096 vCPUs in theory. However, it will cost 512M
> address space for GICR region, which is not necessary big at the
> moment.

"which is unnecessarily big at the moment"

>  Considering the max CPU number that GIC-500 can support and
> the old value of MAX_VIRT_CPUS before commit aa25a61, we increase
> its value to 128.
> 
> Since the domain_max_vcpus has been changed to depends on vgic_ops,

"depend"

> we could have done more work in order to drop the definition of
> MAX_VIRT_CPUS. However, because it is still used for some conditional
> compilation in common code, I think that would be better done in a
> seperate cleanup patch series.

"separate"

(although, I think you could just omit the last paragraph from the
formal commit log and move it below the ---).

> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>
> ---
>  xen/arch/arm/vgic-v3.c       | 1 -
>  xen/include/asm-arm/config.h | 4 ++++
>  2 files changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 50dcfc9..2be9f81 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -895,7 +895,6 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
>          rank = vgic_rank_offset(v, 64, gicd_reg - GICD_IROUTER,
>                                  DABT_DOUBLE_WORD);
>          if ( rank == NULL ) goto write_ignore;
> -        BUG_ON(v->domain->max_vcpus > 8);
>          new_irouter = *r;
>          vgic_lock_rank(v, rank, flags);
>  
> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index 3b23e05..817c216 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -47,7 +47,11 @@
>  #define NR_CPUS 128
>  #endif
>  
> +#ifdef CONFIG_ARM_64
> +#define MAX_VIRT_CPUS 128
> +#else
>  #define MAX_VIRT_CPUS 8
> +#endif
>  
>  #define asmlinkage /* Nothing needed */
>  

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest
  2015-06-05 16:04     ` Julien Grall
@ 2015-06-05 16:31       ` Ian Campbell
  2015-06-05 18:07         ` Julien Grall
  0 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-05 16:31 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Chen Baozi, Chen Baozi

On Fri, 2015-06-05 at 17:04 +0100, Julien Grall wrote:
> On 05/06/15 16:49, Ian Campbell wrote:
> > On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> >> From: Chen Baozi <baozich@gmail.com>
> >>
> >> Currently it only supports up to 8 vCPUs. Increase the region to hold
> >> up to 128 vCPUs, which is the maximum number that GIC-500 supports.
> >>
> >> Signed-off-by: Chen Baozi <baozich@gmail.com>
> >> Reviewed-by: Julien Grall <julien.grall@citrix.com>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > I did briefly wonder if we should shoot for the stars here and reserve
> > space for some enormous set of processors, but I suppose there's no
> > need.
> 
> I thought about the same thing. AFF0 + AFF1 gives 4096 CPUs.
> 
> Although as we will support only 128 vCPUs (see the last patch),
> reserving more space is not necessary. This space saved can be used for
> a bigger PCI MMIO region later.

I don't think rdistr regions need to be contiguous, so we could consider
(not now, when it happens) putting CPUs 128+ into the gap above 4G, on
the basis that a guest with that many CPUs is almost certainly going to
have tonnes of RAM and therefore be 64 bit...
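
(Purely illustrative, with invented names and an invented address, that could
end up looking something like:

    #define GUEST_GICV3_RDIST_REGIONS  2
    #define GUEST_GICV3_GICR1_BASE     0x100000000ULL  /* vCPU128 onwards */

with the existing GUEST_GICV3_GICR0_* region still covering the first 128
vCPUs below 4G.)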

Ian.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 09/10] xen/arm: make domain_max_vcpus return value from vgic_ops
  2015-06-05 16:26   ` Ian Campbell
@ 2015-06-05 16:39     ` Julien Grall
  0 siblings, 0 replies; 42+ messages in thread
From: Julien Grall @ 2015-06-05 16:39 UTC (permalink / raw)
  To: Ian Campbell, Chen Baozi; +Cc: xen-devel, Chen Baozi

On 05/06/15 17:26, Ian Campbell wrote:
> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
>> [...] 
>> +#define GICV2_MAX_CPUS  8
> 
> This and GICV3_MAX_CPUS don't seem very worthwhile, unless there are to
> be other uses of them.
> 
> In fact, GICV3_MAX_CPUS is really MAX_VIRT_CPUS, through its
> association with the affinity mapping, i.e. if one changes so would the
> other, in lockstep. So I think you should just use that for v3 and
> hardcode 8 inline for v2 (since it cannot change).

Technically the vGICv3 driver supports 4096 CPUs and the
restriction is only because of other parts of Xen.

This restriction may be removed by passing AFF1 and the target list directly
to vgic_send_sgi as you suggested earlier.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest
  2015-06-05 16:31       ` Ian Campbell
@ 2015-06-05 18:07         ` Julien Grall
  0 siblings, 0 replies; 42+ messages in thread
From: Julien Grall @ 2015-06-05 18:07 UTC (permalink / raw)
  To: Ian Campbell, Julien Grall; +Cc: xen-devel, Chen Baozi, Chen Baozi



On 05/06/2015 17:31, Ian Campbell wrote:
> On Fri, 2015-06-05 at 17:04 +0100, Julien Grall wrote:
>> On 05/06/15 16:49, Ian Campbell wrote:
>>> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
>>>> From: Chen Baozi <baozich@gmail.com>
>>>>
>>>> Currently it only supports up to 8 vCPUs. Increase the region to hold
>>>> up to 128 vCPUs, which is the maximum number that GIC-500 supports.
>>>>
>>>> Signed-off-by: Chen Baozi <baozich@gmail.com>
>>>> Reviewed-by: Julien Grall <julien.grall@citrix.com>
>>>
>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>
>>> I did briefly wonder if we should shoot for the stars here and reserve
>>> space for some enormous set of processors, but I suppose there's no
>>> need.
>>
>> I thought about the same thing. AFF0 + AFF1 gives 4096 CPUs.
>>
>> Although as we will support only 128 vCPUs (see the last patch),
>> reserving more space is not necessary. This space saved can be used for
>> a bigger PCI MMIO region later.
>
> I don't think rdistr regions need to be contiguous, so we could consider
> (not now, when it happens) putting CPUs 128+ into the gap above 4G, on
> the basis that a guest with that many CPUs is almost certainly going to
> have tonnes of RAM and therefore be 64 bit...

Right, the vGICv3 is supporting multiple rdist regions.

FWIW, we don't need to make any assumption as the vGICv3 driver is only
for 64-bit guests. The emulation misses some bits for full support of
32-bit domains (such as splitting register accesses and cp regs).

That made me think that I'm allowing a 32-bit domain to boot with a
vGICv3. I may want to disable this possibility for now.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 03/10] xen/arm: Use the new functions for vCPUID/vaffinity transformation
  2015-06-05 15:56   ` Ian Campbell
@ 2015-06-05 18:18     ` Julien Grall
  2015-06-08 10:05       ` Ian Campbell
  0 siblings, 1 reply; 42+ messages in thread
From: Julien Grall @ 2015-06-05 18:18 UTC (permalink / raw)
  To: Ian Campbell, Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi

On 05/06/2015 16:56, Ian Campbell wrote:
> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
>> From: Chen Baozi <baozich@gmail.com>
>>
>> There are 3 places to change:
>>
>> * Initialise vMPIDR value in vcpu_initialise()
>> * Find the vCPU from vMPIDR affinity information when accessing GICD
>>    registers in vGIC
>> * Find the vCPU from vMPIDR affinity information when booting with vPSCI
>>    in vGIC
>>    - Also make the code for PSCI 0.1 use MPIDR-like value as the cpuid.
>
> Does this "- Also ..." not need to be done at the same time as the
> change to how we describe things in the FDT? Since that is where the
> guest gets the parameter from, isn't it?

Well, we only support 8 CPUs. So this change will return the same value
as before. It may be worth mentioning it.

On the other hand, both PSCI 0.1 and PSCI 0.2 are modified to respect the
MPIDR-like value within this patch. The wording in the commit message may be
misleading.

Somehow the code paths slightly differ since PSCI 0.2 for guests was
added. The spec says (PSCI 0.1 Section 6.3 (ARM DEN 0022A)):

"Ideally platform discovery mechanism such as firmware tables would be
used by secure firmware to describe the set of valid CPUIDs to the
hypervisor or Rich OS, if the former is not present. The hypervisor in
turn can create and supply virtual discovery mechanisms to its guests.""

I interpreted this as the CPUID being equal to the "reg" property in the
DT (which is an MPIDR-like value).

FWIW, this is the interpretation made by Linux too.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 05/10] xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask
  2015-06-05 16:09   ` Ian Campbell
@ 2015-06-05 18:25     ` Julien Grall
  2015-06-08 10:06       ` Ian Campbell
  0 siblings, 1 reply; 42+ messages in thread
From: Julien Grall @ 2015-06-05 18:25 UTC (permalink / raw)
  To: Ian Campbell, Chen Baozi; +Cc: Julien Grall, xen-devel, Chen Baozi



On 05/06/2015 17:09, Ian Campbell wrote:
>> +         * injection, ignoring level 2 & 3.
>> +         */
>> +        if ( gicv3_sgir_to_cpumask(&vcpu_mask, sgir) )
>> +        {
>> +            gprintk(XENLOG_WARNING, "Wrong affinity in SGI1R_EL register\n");
>
> I don't think we need to log this. The guest has asked to send an SGI to
> a VCPU which we know can't possibly exist. I'm not sure what real h/w
> would do, but if it is e.g. UNPREDICTABLE then we should consider
> killing the guest here. I suspect it's actually just ignored, in which
> case we can silently do the same.

 From the spec:

"Note: if a bit is one and the bit does not correspond to a valid target 
processor, the bit must be ignored by the Distributor. In such cases, a 
Distributor may optionally generate an SEI."

The implementation of SEI is implementation defined. I'm not sure what 
would be the right behavior to adopt here.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 03/10] xen/arm: Use the new functions for vCPUID/vaffinity transformation
  2015-06-05 18:18     ` Julien Grall
@ 2015-06-08 10:05       ` Ian Campbell
  2015-06-08 13:00         ` Julien Grall
  0 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-08 10:05 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Chen Baozi, Chen Baozi

On Fri, 2015-06-05 at 19:18 +0100, Julien Grall wrote:
> On 05/06/2015 16:56, Ian Campbell wrote:
> > On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> >> From: Chen Baozi <baozich@gmail.com>
> >>
> >> There are 3 places to change:
> >>
> >> * Initialise vMPIDR value in vcpu_initialise()
> >> * Find the vCPU from vMPIDR affinity information when accessing GICD
> >>    registers in vGIC
> >> * Find the vCPU from vMPIDR affinity information when booting with vPSCI
> >>    in vGIC
> >>    - Also make the code for PSCI 0.1 use MPIDR-like value as the cpuid.
> >
> > Does this "- Also ..." not need to be done at the same time as the
> > change to how we describe things in the FDT? Since that is where the
> > guest gets the parameter from, isn't it?
> 
> Well, we only support 8 CPUs.

I was assuming this would change later in the patch. I don't think PSCI
0.1 is limited to gic v2, is it? (If it is then this is worth mentioning
in the commit message too)

>  So this change will return the same value
> as before. It may be worth mentioning it.
> 
> On the other hand, both PSCI 0.1 and PSCI 0.2 are modified to respect the
> MPIDR-like value within this patch. The wording in the commit message may be
> misleading.

Yes, possibly.

> Somehow the code paths slightly differ since PSCI 0.2 for guests was
> added. The spec says (PSCI 0.1 Section 6.3 (ARM DEN 0022A)):
> 
> "Ideally platform discovery mechanism such as firmware tables would be
> used by secure firmware to describe the set of valid CPUIDs to the
> hypervisor or Rich OS, if the former is not present. The hypervisor in
> turn can create and supply virtual discovery mechanisms to its guests.""
> 
> I interpreted this as the CPUID being equal to the "reg" property in the
> DT (which is an MPIDR-like value).
> 
> FWIW, this is the interpretation made by Linux too.

Documentation/devicetree/bindings/arm/cpus.txt requires it to be the
MPIDR value on armv7 and v8.

Ian.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 05/10] xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask
  2015-06-05 18:25     ` Julien Grall
@ 2015-06-08 10:06       ` Ian Campbell
  0 siblings, 0 replies; 42+ messages in thread
From: Ian Campbell @ 2015-06-08 10:06 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Chen Baozi, Chen Baozi

On Fri, 2015-06-05 at 19:25 +0100, Julien Grall wrote:
> 
> On 05/06/2015 17:09, Ian Campbell wrote:
> >> +         * injection, ignoring level 2 & 3.
> >> +         */
> >> +        if ( gicv3_sgir_to_cpumask(&vcpu_mask, sgir) )
> >> +        {
> >> +            gprintk(XENLOG_WARNING, "Wrong affinity in SGI1R_EL register\n");
> >
> > I don't think we need to log this. The guest has asked to send an SGI to
> > a VCPU which we know can't possibly exist. I'm not sure what real h/w
> > would do, but if it is e.g. UNPREDICTABLE then we should consider
> > killing the guest here. I suspect it's actually just ignored, in which
> > case we can silently do the same.
> 
>  From the spec:
> 
> "Note: if a bit is one and the bit does not correspond to a valid target 
> processor, the bit must be ignored by the Distributor. In such cases, a 
> Distributor may optionally generate an SEI."
> 
> The implementation of SEI is implementation defined. I'm not sure what 
> would be the right behavior to adopt here.

In general I'm in favour of being as harsh as allowed by default (i.e.
without a good reason to do otherwise), to avoid guests coming to rely
on lenient behaviour which we might find we want to change in the
future.

Ian

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 03/10] xen/arm: Use the new functions for vCPUID/vaffinity transformation
  2015-06-08 10:05       ` Ian Campbell
@ 2015-06-08 13:00         ` Julien Grall
  0 siblings, 0 replies; 42+ messages in thread
From: Julien Grall @ 2015-06-08 13:00 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel, Chen Baozi, Chen Baozi

Hi Ian,

On 08/06/2015 11:05, Ian Campbell wrote:
> On Fri, 2015-06-05 at 19:18 +0100, Julien Grall wrote:
>> On 05/06/2015 16:56, Ian Campbell wrote:
>>> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
>>>> From: Chen Baozi <baozich@gmail.com>
>>>>
>>>> There are 3 places to change:
>>>>
>>>> * Initialise vMPIDR value in vcpu_initialise()
>>>> * Find the vCPU from vMPIDR affinity information when accessing GICD
>>>>     registers in vGIC
>>>> * Find the vCPU from vMPIDR affinity information when booting with vPSCI
>>>>     in vGIC
>>>>     - Also make the code for PSCI 0.1 use MPIDR-like value as the cpuid.
>>>
>>> Does this "- Also ..." not need to be done at the same time as the
>>> change to how we describe things in the FDT? Since that is where the
>>> guest gets the parameter from, isn't it?
>>
>> Well, we only support 8 CPUs.
>
> I was assuming this would change later in the patch. I don't think PSCI
> 0.1 is limited to gic v2, is it? (If it is then this is worth mentioning
> in the commit message too)

It's not limited to GICv2. But I wasn't able to find the definition of
the CPUID parameter.

>>   So this change will return the same value
>> as before. It may be worth mentioning it.
>>
>> On the other hand, both PSCI 0.1 and PSCI 0.2 are modified to respect the
>> MPIDR-like value within this patch. The wording in the commit message may be
>> misleading.
>
> Yes, possibly.
>
>> Somehow the code paths slightly differ since PSCI 0.2 for guests was
>> added. The spec says (PSCI 0.1 Section 6.3 (ARM DEN 0022A)):
>>
>> "Ideally platform discovery mechanism such as firmware tables would be
>> used by secure firmware to describe the set of valid CPUIDs to the
>> hypervisor or Rich OS, if the former is not present. The hypervisor in
>> turn can create and supply virtual discovery mechanisms to its guests.""
>>
>> I interpreted this as the CPUID being equal to the "reg" property in the
>> DT (which is an MPIDR-like value).
>>
>> FWIW, this is the interpretation made by Linux too.
>
> Documentation/devicetree/bindings/arm/cpus.txt requires it to be the
> MPIDR value on armv7 and v8.

Well, this document only describes the device tree binding. I don't find
anything which says that this value should be directly passed to
PSCI. Did I miss something?

Not really related to this patch in particular, but the size of "reg"
may be 2 cells, which we currently don't handle.

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 04/10] xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi
  2015-06-05 16:05   ` Ian Campbell
@ 2015-06-10 10:21     ` Chen Baozi
  2015-06-10 10:27       ` Ian Campbell
  0 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-10 10:21 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Julien Grall, xen-devel

On Fri, Jun 05, 2015 at 05:05:29PM +0100, Ian Campbell wrote:
> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> > From: Chen Baozi <baozich@gmail.com>
> > 
> > Use cpumask_t instead of unsigned long which can only express 64 cpus at
> > the most. Add the {gicv2|gicv3}_sgir_to_cpumask in corresponding vGICs
> > to translate GICD_SGIR/ICC_SGI1R_EL1 to vcpu_mask for vgic_to_sgi.
> > 
> > Signed-off-by: Chen Baozi <baozich@gmail.com>
> > ---
> >  xen/arch/arm/vgic-v2.c            | 16 +++++++++++++---
> >  xen/arch/arm/vgic-v3.c            | 18 ++++++++++++++----
> >  xen/arch/arm/vgic.c               | 31 ++++++++++++++++++++-----------
> >  xen/include/asm-arm/gic.h         |  1 +
> >  xen/include/asm-arm/gic_v3_defs.h |  2 ++
> >  xen/include/asm-arm/vgic.h        |  2 +-
> >  6 files changed, 51 insertions(+), 19 deletions(-)
> > 
> > diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> > index 3be1a51..17a3c9f 100644
> > --- a/xen/arch/arm/vgic-v2.c
> > +++ b/xen/arch/arm/vgic-v2.c
> > @@ -33,6 +33,15 @@
> >  #include <asm/gic.h>
> >  #include <asm/vgic.h>
> >  
> > +static inline void gicv2_sgir_to_cpumask(cpumask_t *cpumask,
> > +                                         const register_t sgir)
> > +{
> > +    unsigned long target_list;
> > +
> > +    target_list = ((sgir & GICD_SGI_TARGET_MASK) >> GICD_SGI_TARGET_SHIFT);
> > +    bitmap_copy(cpumask_bits(cpumask), &target_list, GICD_SGI_TARGET_BITS);
> > +}
> > +
> >  static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
> >  {
> >      struct hsr_dabt dabt = info->dabt;
> > @@ -201,16 +210,17 @@ static int vgic_v2_to_sgi(struct vcpu *v, register_t sgir)
> >      int virq;
> >      int irqmode;
> >      enum gic_sgi_mode sgi_mode;
> > -    unsigned long vcpu_mask = 0;
> > +    cpumask_t vcpu_mask;
> 
> How big is NR_CPUS (and hence this variable) going to be at the end of
> this series? I can't seem to find the patch which bumps it, so I suppose
> it remains 128?

NR_CPUS is defined as 128 or MAX_PHYS_CPUS in asm-arm/config.h, which
was introduced in commit 2b36ebd4.

> 
> Perhaps we want to bite the bullet now and change the vgic_to_sgi to
> take an affinity path thing (aff3.aff2.aff1) + target list, instead of a
> cpumask? That makes sense given the 16 CPU per AFF0 limitation, since
> there is only a limited set of cpumask patterns which can be specified,
> so we don't need the fully arbitrary bitmap.

It seems that only GICv3 supports affinity levels, and vgic_to_sgi is shared
by both vGICv2 and vGICv3... However, we can make aff3==aff2==aff1==0 and put
the 8-bit GICv2 cpumask in the target list. If this is good for vGICv2, I
have no problem with it.

> 
> > +    cpumask_clear(&vcpu_mask);
> >      irqmode = (sgir & GICD_SGI_TARGET_LIST_MASK) >> GICD_SGI_TARGET_LIST_SHIFT;
> >      virq = (sgir & GICD_SGI_INTID_MASK);
> > -    vcpu_mask = (sgir & GICD_SGI_TARGET_MASK) >> GICD_SGI_TARGET_SHIFT;
> >  
> >      /* Map GIC sgi value to enum value */
> >      switch ( irqmode )
> >      {
> >      case GICD_SGI_TARGET_LIST_VAL:
> > +        gicv2_sgir_to_cpumask(&vcpu_mask, sgir);
> >          sgi_mode = SGI_TARGET_LIST;
> >          break;
> >      case GICD_SGI_TARGET_OTHERS_VAL:
> > @@ -226,7 +236,7 @@ static int vgic_v2_to_sgi(struct vcpu *v, register_t sgir)
> >          return 0;
> >      }
> >  
> > -    return vgic_to_sgi(v, sgir, sgi_mode, virq, vcpu_mask);
> > +    return vgic_to_sgi(v, sgir, sgi_mode, virq, &vcpu_mask);
> >  }
> >  
> >  static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
> > diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> > index ef9a71a..2bf5294 100644
> > --- a/xen/arch/arm/vgic-v3.c
> > +++ b/xen/arch/arm/vgic-v3.c
> > @@ -972,22 +972,32 @@ write_ignore:
> >      return 1;
> >  }
> >  
> > +static inline void gicv3_sgir_to_cpumask(cpumask_t *cpumask,
> > +                                         const register_t sgir)
> > +{
> > +    unsigned long target_list;
> > +
> > +    target_list = sgir & ICH_SGI_TARGETLIST_MASK;
> > +    bitmap_copy(cpumask_bits(cpumask), &target_list, ICH_SGI_TARGET_BITS);
> 
> Do you not need to left shift the target_list bits into the cpumask
> based on AFF1+?
> 
> Otherwise aren't you delivering an SGI targeted at, say, 0.0.1.0xab to
> 0.0.0.0xab?

That will be supported in the next patch.

> 
> > +}
> > +
> >  static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
> >  {
> >      int virq;
> >      int irqmode;
> >      enum gic_sgi_mode sgi_mode;
> > -    unsigned long vcpu_mask = 0;
> > +    cpumask_t vcpu_mask;
> >  
> > +    cpumask_clear(&vcpu_mask);
> >      irqmode = (sgir >> ICH_SGI_IRQMODE_SHIFT) & ICH_SGI_IRQMODE_MASK;
> >      virq = (sgir >> ICH_SGI_IRQ_SHIFT ) & ICH_SGI_IRQ_MASK;
> > -    /* SGI's are injected at Rdist level 0. ignoring affinity 1, 2, 3 */
> > -    vcpu_mask = sgir & ICH_SGI_TARGETLIST_MASK;
> >  
> >      /* Map GIC sgi value to enum value */
> >      switch ( irqmode )
> >      {
> >      case ICH_SGI_TARGET_LIST:
> > +        /* SGI's are injected at Rdist level 0. ignoring affinity 1, 2, 3 */
> > +        gicv3_sgir_to_cpumask(&vcpu_mask, sgir);
> >          sgi_mode = SGI_TARGET_LIST;
> >          break;
> >      case ICH_SGI_TARGET_OTHERS:
> > @@ -998,7 +1008,7 @@ static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
> >          return 0;
> >      }
> >  
> > -    return vgic_to_sgi(v, sgir, sgi_mode, virq, vcpu_mask);
> > +    return vgic_to_sgi(v, sgir, sgi_mode, virq, &vcpu_mask);
> >  }
> >  
> >  static int vgic_v3_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 7b387b7..1bd86f8 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -318,15 +318,20 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> >      }
> >  }
> >  
> > -/* TODO: unsigned long is used to fit vcpu_mask.*/
> >  int vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode, int virq,
> > -                unsigned long vcpu_mask)
> > +                cpumask_t *vcpu_mask)
> >  {
> >      struct domain *d = v->domain;
> >      int vcpuid;
> >      int i;
> >  
> > -    ASSERT(d->max_vcpus < 8*sizeof(vcpu_mask));
> > +    /*
> > +     * cpumask_t is based on NR_CPUS and there is no relation between
> > +     * NR_CPUS and MAX_VIRT_CPUS. Furthermore, NR_CPUS can be configured
> > +     * at build time by the user. So we add a BUILD_BUG_ON here in order
> > +     * to avoid insecure hypervisor.
> 
> Insecure?
> 
> > +     */
> > +    BUILD_BUG_ON(sizeof(cpumask_t)*8 < MAX_VIRT_CPUS);
> 
> Is this the same as (NR_CPUS < MAX_VIRT_CPUS)?

Oops. (NR_CPUS < MAX_VIRT_CPUS) is better, for there might be unused bits
in cpumask_t which make its size not equal to NR_CPUS.
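
So the check would presumably just become (sketch):

    BUILD_BUG_ON(NR_CPUS < MAX_VIRT_CPUS);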

cheers,

Baozi.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 04/10] xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi
  2015-06-10 10:21     ` Chen Baozi
@ 2015-06-10 10:27       ` Ian Campbell
  0 siblings, 0 replies; 42+ messages in thread
From: Ian Campbell @ 2015-06-10 10:27 UTC (permalink / raw)
  To: Chen Baozi; +Cc: Julien Grall, xen-devel

On Wed, 2015-06-10 at 18:21 +0800, Chen Baozi wrote:
> > Perhaps we want to bite the bullet now and change the vgic_to_sgi to
> > take an affinity path thing (aff3.aff2.aff1) + target list, instead of a
> > cpumask? That makes sense given the 16 CPU per AFF0 limitation, since
> > there is only a limited set of cpumask patterns which can be specified,
> > so we don't need the fully arbitrary bitmap.
> 
> It seems that only GICv3 supports affinity levels, and vgic_to_sgi is shared
> by both vGICv2 and vGICv3... However, we can make aff3==aff2==aff1==0 and put
> the 8-bit GICv2 cpumask in the target list. If this is good for vGICv2, I
> have no problem with it.

I think that makes sense, we can have gicv2 assert that aff3..1 == 0 (or
the corresponding argument which encodes them all).
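
Something along these lines on the gicv2 side, presumably (sketch only; the
aff_path parameter name is invented):

    /* vgic-v2: the whole affinity path must be zero */
    ASSERT(aff_path == 0);
    target_list = (sgir & GICD_SGI_TARGET_MASK) >> GICD_SGI_TARGET_SHIFT;

with the 8-bit GICv2 target list then going straight into the 16-bit
target_list argument.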

The alternative would be to refactor somehow such that the generic
interface was suitable for both and the difference becomes internal.

Ian.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
  2015-06-05 16:22   ` Ian Campbell
@ 2015-06-11  9:20     ` Chen Baozi
  2015-06-11  9:37       ` Ian Campbell
  0 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-11  9:20 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Keir Fraser, Andrew Cooper, Tim Deegan, Julien Grall,
	Stefano Stabellini, Jan Beulich, xen-devel

On Fri, Jun 05, 2015 at 05:22:56PM +0100, Ian Campbell wrote:
> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> > From: Chen Baozi <baozich@gmail.com>
> > 
> > evtchn_init will call domain_max_vcpus to allocate poll_mask. On
> > arm/arm64 platform, this number is determined by the vGIC the guest
> > is going to use, which won't be initialised until arch_domain_create
> > is called in current implementation. However, moving arch_domain_create
> > means that we will allocate memory before checking the XSM policy,
> > which seems not to be acceptable because if the domain is not allowed
> > to boot by XSM policy the expensive execution of arch_domain_create
> > is wasteful. Thus, we create the arch_domain_preinit to make vgic_ops
> > initialisation be done earlier.
> 
> I don't have a fundamental objection to this refactoring, but I'm
> curious under what circumstances something would belong in preinit
> rather than create, i.e. what is the expected semantics of this new hook
> vs the old one.

Hmmm, I didn't think about it at this level, :P. Instead, I just brought
the code which must be executed before evtchn_init in advance without
further consideration on semantics etc...

In fact, the current arch_domain_preinit on arm is all about vgic. I
was about to use the name vgic_preinit, but that would be an arm-specific
name which I thought was not suitable to add in the common codes...

Or will it be clearer to call a subfunction called vgic_preinit in the
arch_domain_preinit (for arm) and put the current main contents of
arch_domain_preinit to vgic_preinit?

Cheers,

Baozi.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
  2015-06-11  9:20     ` Chen Baozi
@ 2015-06-11  9:37       ` Ian Campbell
  2015-06-11 11:16         ` Chen Baozi
  0 siblings, 1 reply; 42+ messages in thread
From: Ian Campbell @ 2015-06-11  9:37 UTC (permalink / raw)
  To: Chen Baozi
  Cc: Keir Fraser, Andrew Cooper, Tim Deegan, Julien Grall,
	Stefano Stabellini, Jan Beulich, xen-devel

On Thu, 2015-06-11 at 17:20 +0800, Chen Baozi wrote:
> On Fri, Jun 05, 2015 at 05:22:56PM +0100, Ian Campbell wrote:
> > On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> > > From: Chen Baozi <baozich@gmail.com>
> > > 
> > > evtchn_init will call domain_max_vcpus to allocate poll_mask. On
> > > arm/arm64 platform, this number is determined by the vGIC the guest
> > > is going to use, which won't be initialised until arch_domain_create
> > > is called in current implementation. However, moving arch_domain_create
> > > means that we will allocate memory before checking the XSM policy,
> > > which seems not to be acceptable because if the domain is not allowed
> > > to boot by XSM policy the expensive execution of arch_domain_create
> > > is wasteful. Thus, we create the arch_domain_preinit to make vgic_ops
> > > initialisation be done earlier.
> > 
> > I don't have a fundamental objection to this refactoring, but I'm
> > curious under what circumstances something would belong in preinit
> > rather than create, i.e. what is the expected semantics of this new hook
> > vs the old one.
> 
> Hmmm, I didn't think about it at this level, :P. Instead, I just brought
> the code which must be executed before evtchn_init in advance without
> further consideration on semantics etc...
> 
> In fact, the current arch_domain_preinit on arm is all about vgic. I
> was about to use the name vgic_preinit, but that would be an arm-specific
> name which I thought was not suitable to add in the common codes...
> 
> Or will it be clearer to call a subfunction called vgic_preinit in the
> arch_domain_preinit (for arm) and put the current main contents of
> arch_domain_preinit to vgic_preinit?

The main question which needs answering is: given some new bit of
functionality which needs initialising when should it be done in preinit
and when should it be in init?

Ian.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
  2015-06-11  9:37       ` Ian Campbell
@ 2015-06-11 11:16         ` Chen Baozi
  2015-06-11 11:47           ` Julien Grall
  0 siblings, 1 reply; 42+ messages in thread
From: Chen Baozi @ 2015-06-11 11:16 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Keir Fraser, Andrew Cooper, Tim Deegan, Julien Grall,
	Stefano Stabellini, Jan Beulich, xen-devel

On Thu, Jun 11, 2015 at 10:37:05AM +0100, Ian Campbell wrote:
> On Thu, 2015-06-11 at 17:20 +0800, Chen Baozi wrote:
> > On Fri, Jun 05, 2015 at 05:22:56PM +0100, Ian Campbell wrote:
> > > On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> > > > From: Chen Baozi <baozich@gmail.com>
> > > > 
> > > > evtchn_init will call domain_max_vcpus to allocate poll_mask. On
> > > > arm/arm64 platform, this number is determined by the vGIC the guest
> > > > is going to use, which won't be initialised until arch_domain_create
> > > > is called in current implementation. However, moving arch_domain_create
> > > > means that we will allocate memory before checking the XSM policy,
> > > > which seems not to be acceptable because if the domain is not allowed
> > > > to boot by XSM policy the expensive execution of arch_domain_create
> > > > is wasteful. Thus, we create the arch_domain_preinit to make vgic_ops
> > > > initialisation be done earlier.
> > > 
> > > I don't have a fundamental objection to this refactoring, but I'm
> > > curious under what circumstances something would belong in preinit
> > > rather than create, i.e. what is the expected semantics of this new hook
> > > vs the old one.
> > 
> > Hmmm, I didn't think about it at this level, :P. Instead, I just brought
> > the code which must be executed before evtchn_init in advance without
> > further consideration on semantics etc...
> > 
> > In fact, the current arch_domain_preinit on arm is all about vgic. I
> > was about to use the name vgic_preinit, but that would be an arm-specific
> > name which I thought was not suitable to add in the common codes...
> > 
> > Or will it be clearer to call a subfunction called vgic_preinit in the
> > arch_domain_preinit (for arm) and put the current main contents of
> > arch_domain_preinit to vgic_preinit?
> 
> The main question which needs answering is: given some new bit of
> functionality which needs initialising when should it be done in preinit
> and when should it be in init?
> 

Resources that would be used (either directly or indirectly) by
xsm_domain_create, evtchn_init or grant_table_create should be initialised
in preinit; otherwise, they should be put into init?

I am not sure, for it doesn't look like a good/exact semantic
definition...

Cheers,

Baozi.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
  2015-06-11 11:16         ` Chen Baozi
@ 2015-06-11 11:47           ` Julien Grall
  2015-06-11 12:45             ` Chen Baozi
  0 siblings, 1 reply; 42+ messages in thread
From: Julien Grall @ 2015-06-11 11:47 UTC (permalink / raw)
  To: Chen Baozi, Ian Campbell
  Cc: Keir Fraser, Andrew Cooper, Tim Deegan, Stefano Stabellini,
	Jan Beulich, xen-devel

Hi Chen,

On 11/06/2015 07:16, Chen Baozi wrote:
> On Thu, Jun 11, 2015 at 10:37:05AM +0100, Ian Campbell wrote:
>> On Thu, 2015-06-11 at 17:20 +0800, Chen Baozi wrote:
>>> On Fri, Jun 05, 2015 at 05:22:56PM +0100, Ian Campbell wrote:
>>>> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
>>>>> From: Chen Baozi <baozich@gmail.com>
>>>>>
>>>>> evtchn_init will call domain_max_vcpus to allocate poll_mask. On
>>>>> arm/arm64 platform, this number is determined by the vGIC the guest
>>>>> is going to use, which won't be initialised until arch_domain_create
>>>>> is called in current implementation. However, moving arch_domain_create
>>>>> means that we will allocate memory before checking the XSM policy,
>>>>> which seems not to be acceptable because if the domain is not allowed
>>>>> to boot by XSM policy the expensive execution of arch_domain_create
>>>>> is wasteful. Thus, we create the arch_domain_preinit to make vgic_ops
>>>>> initialisation be done earlier.
>>>>
>>>> I don't have a fundamental objection to this refactoring, but I'm
>>>> curious under what circumstances something would belong in preinit
>>>> rather than create, i.e. what is the expected semantics of this new hook
>>>> vs the old one.
>>>
>>> Hmmm, I didn't think about it at this level, :P. Instead, I just brought
>>> the code which must be executed before evtchn_init in advance without
>>> further consideration on semantics etc...
>>>
>>> In fact, the current arch_domain_preinit on arm is all about vgic. I
>>> was about to use the name vgic_preinit, but that would be an arm-specific
>>> name which I thought was not suitable to add in the common codes...
>>>
>>> Or will it be clearer to call a subfunction called vgic_preinit in the
>>> arch_domain_preinit (for arm) and put the current main contents of
>>> arch_domain_preinit to vgic_preinit?
>>
>> The main question which needs answering is: given some new bit of
>> functionality which needs initialising when should it be done in preinit
>> and when should it be in init?
>>
>
> The resource that would be used (either directly or indirectly) by
> xsm_domain_create, evtchn_init or grant_table_create, should be initialised
> in preinit? otherwise, it should be put into init?
>
> I am not sure, for it doesn't look like a(n) good/exact semantic
> definition...

Rather than splitting arch_domain_create, I was thinking of handling 
domain_max_vcpus differently:

unsigned int domain_max_vcpus(struct domain *d)
{
    /* Before the vGIC ops are registered, fall back to the
     * compile-time maximum. */
    if ( !d->arch.vgic.ops )
        return MAX_VIRT_CPUS;
    else
        return min(MAX_VIRT_CPUS, d->arch.vgic.ops->max_vcpus);
}

On ARM32, the event channel code doesn't need to allocate poll_mask 
(MAX_VIRT_CPUS < BITS_PER_LONG), so there is no concern there.

On ARM64, we always have to allocate the vCPU mask, but allocating a bit 
more memory wouldn't be so bad (2 unsigned longs rather than 1 for GICv2).
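
For reference, this is roughly where that value ends up being used (a
sketch from memory of evtchn_init, not copied from the tree):

int evtchn_init(struct domain *d)
{
    ...
#if MAX_VIRT_CPUS > BITS_PER_LONG
    /* poll_mask is only a separate allocation when the compile-time
     * maximum doesn't fit in a single unsigned long, which is why
     * ARM32 (MAX_VIRT_CPUS == 8) is unaffected. */
    d->poll_mask = xmalloc_array(unsigned long,
                                 BITS_TO_LONGS(domain_max_vcpus(d)));
    if ( d->poll_mask == NULL )
        return -ENOMEM;
    bitmap_zero(d->poll_mask, domain_max_vcpus(d));
#endif
    ...
    return 0;
}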

Regards,

-- 
Julien Grall


* Re: [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init
  2015-06-11 11:47           ` Julien Grall
@ 2015-06-11 12:45             ` Chen Baozi
  0 siblings, 0 replies; 42+ messages in thread
From: Chen Baozi @ 2015-06-11 12:45 UTC (permalink / raw)
  To: Julien Grall
  Cc: Keir Fraser, Ian Campbell, Andrew Cooper, Tim Deegan,
	Stefano Stabellini, Jan Beulich, xen-devel

Hi Julien,

On Thu, Jun 11, 2015 at 07:47:11AM -0400, Julien Grall wrote:
> Hi Chen,
> 
> On 11/06/2015 07:16, Chen Baozi wrote:
> >On Thu, Jun 11, 2015 at 10:37:05AM +0100, Ian Campbell wrote:
> >>On Thu, 2015-06-11 at 17:20 +0800, Chen Baozi wrote:
> >>>On Fri, Jun 05, 2015 at 05:22:56PM +0100, Ian Campbell wrote:
> >>>>On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> >>>>>From: Chen Baozi <baozich@gmail.com>
> >>>>>
> >>>>>evtchn_init will call domain_max_vcpus to allocate poll_mask. On
> >>>>>arm/arm64 platform, this number is determined by the vGIC the guest
> >>>>>is going to use, which won't be initialised until arch_domain_create
> >>>>>is called in current implementation. However, moving arch_domain_create
> >>>>>means that we will allocate memory before checking the XSM policy,
> >>>>>which seems not to be acceptable because if the domain is not allowed
> >>>>>to boot by XSM policy the expensive execution of arch_domain_create
> >>>>>is wasteful. Thus, we create the arch_domain_preinit to make vgic_ops
> >>>>>initialisation be done earlier.
> >>>>
> >>>>I don't have a fundamental objection to this refactoring, but I'm
> >>>>curious under what circumstances something would belong in preinit
> >>>>rather than create, i.e. what is the expected semantics of this new hook
> >>>>vs the old one.
> >>>
> >>>Hmmm, I didn't think about it at this level, :P. Instead, I just brought
> >>>the code which must be executed before evtchn_init in advance without
> >>>further consideration on semantics etc...
> >>>
> >>>In fact, the current arch_domain_preinit on arm is all about vgic. I
> >>>was about to use the name vgic_preinit, but that would be an arm-specific
> >>>name which I thought was not suitable to add in the common codes...
> >>>
> >>>Or will it be clearer to call a subfunction called vgic_preinit in the
> >>>arch_domain_preinit (for arm) and put the current main contents of
> >>>arch_domain_preinit to vgic_preinit?
> >>
> >>The main question which needs answering is: given some new bit of
> >>functionality which needs initialising when should it be done in preinit
> >>and when should it be in init?
> >>
> >
> >The resource that would be used (either directly or indirectly) by
> >xsm_domain_create, evtchn_init or grant_table_create, should be initialised
> >in preinit? otherwise, it should be put into init?
> >
> >I am not sure, for it doesn't look like a(n) good/exact semantic
> >definition...
> 
> Rather than splitting arch_domain_init, I was thinking to handle
> domain_max_vcpus differently:
> 
> domain_max_vcpus(struct domain *d)
> {
>    if ( !d->arch.vgic.ops )
>      return MAX_VIRT_CPUS;
>    else
>      return (min(MAX_VIRT_CPUS, d->arch.vgic.ops.max_vcpus));
> }
> 
> On ARM32, event channel doesn't need to allocate the poll_mask
> (MAX_VIRT_CPUS < BITS_PER_LONG). So no concern.
> 
> On ARM64, we have to always allocate the vCPU mask. This wouldn't be so bad
> to allocate more memory (2 unsigned long rather than 1 for GICv2).

That looks like the second suggestion in Ian's mail. I'll take it in the
next version.

Thanks.

Baozi.


end of thread, other threads:[~2015-06-11 12:44 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-01 12:56 [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Chen Baozi
2015-06-01 12:56 ` [PATCH V6 01/10] xen/arm: gic-v3: Increase the size of GICR in address space for guest Chen Baozi
2015-06-05 15:49   ` Ian Campbell
2015-06-05 16:04     ` Julien Grall
2015-06-05 16:31       ` Ian Campbell
2015-06-05 18:07         ` Julien Grall
2015-06-01 12:56 ` [PATCH V6 02/10] xen/arm: Add functions of mapping between vCPUID and virtual affinity Chen Baozi
2015-06-05 15:54   ` Ian Campbell
2015-06-01 12:56 ` [PATCH V6 03/10] xen/arm: Use the new functions for vCPUID/vaffinity transformation Chen Baozi
2015-06-05 15:56   ` Ian Campbell
2015-06-05 18:18     ` Julien Grall
2015-06-08 10:05       ` Ian Campbell
2015-06-08 13:00         ` Julien Grall
2015-06-01 12:56 ` [PATCH V6 04/10] xen/arm: Use cpumask_t type for vcpu_mask in vgic_to_sgi Chen Baozi
2015-06-05 16:05   ` Ian Campbell
2015-06-10 10:21     ` Chen Baozi
2015-06-10 10:27       ` Ian Campbell
2015-06-01 12:56 ` [PATCH V6 05/10] xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask Chen Baozi
2015-06-05 16:09   ` Ian Campbell
2015-06-05 18:25     ` Julien Grall
2015-06-08 10:06       ` Ian Campbell
2015-06-01 12:56 ` [PATCH V6 06/10] tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU Chen Baozi
2015-06-05 16:11   ` Ian Campbell
2015-06-05 16:12   ` Ian Campbell
2015-06-01 12:56 ` [PATCH V6 07/10] xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity Chen Baozi
2015-06-05 16:13   ` Ian Campbell
2015-06-01 12:56 ` [PATCH V6 08/10] xen: Add arch_domain_preinit to initialise vGIC before evtchn_init Chen Baozi
2015-06-05 16:22   ` Ian Campbell
2015-06-11  9:20     ` Chen Baozi
2015-06-11  9:37       ` Ian Campbell
2015-06-11 11:16         ` Chen Baozi
2015-06-11 11:47           ` Julien Grall
2015-06-11 12:45             ` Chen Baozi
2015-06-01 12:56 ` [PATCH V6 09/10] xen/arm: make domain_max_vcpus return value from vgic_ops Chen Baozi
2015-06-05 16:26   ` Ian Campbell
2015-06-05 16:39     ` Julien Grall
2015-06-01 12:56 ` [PATCH V6 10/10] xen/arm64: increase MAX_VIRT_CPUS to 128 on arm64 Chen Baozi
2015-06-05 16:27   ` Ian Campbell
2015-06-05 14:08 ` [PATCH V6 00/10] Support more than 8 vcpus on arm64 with GICv3 Ian Campbell
2015-06-05 14:37   ` Julien Grall
2015-06-05 15:15     ` Ian Campbell
2015-06-05 14:23 ` Ian Campbell
