linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v2 0/6] Fix vgic initialization problems
@ 2014-12-13 11:17 Christoffer Dall
  2014-12-13 11:17 ` [PATCH v2 1/6] arm/arm64: KVM: vgic: move reset initialization into vgic_init_maps() Christoffer Dall
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Christoffer Dall @ 2014-12-13 11:17 UTC (permalink / raw)
  To: linux-arm-kernel

This series fixes problems with initializing the VGIC.

The problem is that we were partially initializing the VGIC state on
demand, and doing some final initialization when we were about to run a
VCPU for the first time.

This broke migration, because the first-vcpu-run init would overwrite
restored state.

We also clean up the naming of the init functions and add checks when
creating VCPUs and when injecting IRQs from userspace.

This series invalidates patch 3 "KVM: arm/arm64: check vgic_initialized
before VCPU creation" in Eric Auger's vgic init ioctl series.

Eric's series should be applied after this one, making future ABIs
(IRQFD, VFIO, GICv3, ...) require explicit userspace vgic initialization
instead of this on-demand approach that we now have to maintain for
legacy userspace compatibility.

I've tested this with 32-bit and 64-bit QEMU and kvmtool.

Changes since v1:
 - Use vgic_initialized in vgic_init instead of open-coded check
 - Check return value from vgic_init in kvm_vgic_inject_irq
 - Added Patch 6

Christoffer Dall (5):
  arm/arm64: KVM: Rename vgic_initialized to vgic_ready
  arm/arm64: KVM: Add (new) vgic_initialized macro
  arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized
  arm/arm64: KVM: Initialize the vgic on-demand when injecting IRQs
  arm/arm64: KVM: Require in-kernel vgic for the arch timers

Peter Maydell (1):
  arm/arm64: KVM: vgic: move reset initialization into vgic_init_maps()

 arch/arm/kvm/arm.c           |  24 +++++++---
 include/kvm/arm_arch_timer.h |   2 +-
 include/kvm/arm_vgic.h       |  12 +++--
 virt/kvm/arm/arch_timer.c    |  15 ++++---
 virt/kvm/arm/vgic.c          | 101 +++++++++++++++++++++----------------------
 5 files changed, 88 insertions(+), 66 deletions(-)

-- 
2.1.2.330.g565301e.dirty

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v2 1/6] arm/arm64: KVM: vgic: move reset initialization into vgic_init_maps()
  2014-12-13 11:17 [PATCH v2 0/6] Fix vgic initialization problems Christoffer Dall
@ 2014-12-13 11:17 ` Christoffer Dall
  2014-12-13 11:17 ` [PATCH v2 2/6] arm/arm64: KVM: Rename vgic_initialized to vgic_ready Christoffer Dall
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Christoffer Dall @ 2014-12-13 11:17 UTC (permalink / raw)
  To: linux-arm-kernel

From: Peter Maydell <peter.maydell@linaro.org>

VGIC initialization currently happens in three phases:
 (1) kvm_vgic_create() (triggered by userspace GIC creation)
 (2) vgic_init_maps() (triggered by userspace GIC register read/write
     requests, or from kvm_vgic_init() if not already run)
 (3) kvm_vgic_init() (triggered by first VM run)

We were doing initialization of some state to correspond with the
state of a freshly-reset GIC in kvm_vgic_init(); this is too late,
since it will overwrite changes made by userspace using the
register access APIs before the VM is run. Move this initialization
earlier, into the vgic_init_maps() phase.

This fixes a bug where QEMU could successfully restore a saved
VM state snapshot into a VM that had already been run, but could
not restore it "from cold" using the -loadvm command line option
(the symptoms being that the restored VM would run but interrupts
were ignored).

Finally, rename vgic_init_maps to vgic_init and kvm_vgic_init to
kvm_vgic_map_resources.

  [ This patch was originally written by Peter Maydell, but I have
    modified it somewhat heavily, renaming various bits and moving code
    around.  If something is broken, I am to be blamed. - Christoffer ]

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
This patch was originally named "vgic: move reset initialization into
vgic_init_maps()" but I renamed it slightly to match the other vgic
patches in the kernel.  I also made the following additional changes to
the original patch:
 - Renamed kvm_vgic_init to kvm_vgic_map_resources
 - Renamed vgic_init_maps to vgic_init
 - Moved vgic_enable call into existing vcpu loop in vgic_init
 - Moved ITARGETSRn initialization above vcpu loop in vgic_init (the idea
   is to init global state first, then vcpu state).
 - Added comment in kvm_vgic_map_resources

 arch/arm/kvm/arm.c     |  6 ++--
 include/kvm/arm_vgic.h |  4 +--
 virt/kvm/arm/vgic.c    | 77 +++++++++++++++++++++-----------------------------
 3 files changed, 37 insertions(+), 50 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9e193c8..a56cbb5 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -427,11 +427,11 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	vcpu->arch.has_run_once = true;
 
 	/*
-	 * Initialize the VGIC before running a vcpu the first time on
-	 * this VM.
+	 * Map the VGIC hardware resources before running a vcpu the first
+	 * time on this VM.
 	 */
 	if (unlikely(!vgic_initialized(vcpu->kvm))) {
-		ret = kvm_vgic_init(vcpu->kvm);
+		ret = kvm_vgic_map_resources(vcpu->kvm);
 		if (ret)
 			return ret;
 	}
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 206dcc3..fe9783b 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -274,7 +274,7 @@ struct kvm_exit_mmio;
 #ifdef CONFIG_KVM_ARM_VGIC
 int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
 int kvm_vgic_hyp_init(void);
-int kvm_vgic_init(struct kvm *kvm);
+int kvm_vgic_map_resources(struct kvm *kvm);
 int kvm_vgic_create(struct kvm *kvm);
 void kvm_vgic_destroy(struct kvm *kvm);
 void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu);
@@ -321,7 +321,7 @@ static inline int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr,
 	return -ENXIO;
 }
 
-static inline int kvm_vgic_init(struct kvm *kvm)
+static inline int kvm_vgic_map_resources(struct kvm *kvm)
 {
 	return 0;
 }
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 3aaca49..146c35f 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -91,6 +91,7 @@
 #define ACCESS_WRITE_VALUE	(3 << 1)
 #define ACCESS_WRITE_MASK(x)	((x) & (3 << 1))
 
+static int vgic_init(struct kvm *kvm);
 static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu);
 static void vgic_retire_lr(int lr_nr, int irq, struct kvm_vcpu *vcpu);
 static void vgic_update_state(struct kvm *kvm);
@@ -1726,39 +1727,14 @@ static int vgic_vcpu_init_maps(struct kvm_vcpu *vcpu, int nr_irqs)
 
 	int sz = (nr_irqs - VGIC_NR_PRIVATE_IRQS) / 8;
 	vgic_cpu->pending_shared = kzalloc(sz, GFP_KERNEL);
-	vgic_cpu->vgic_irq_lr_map = kzalloc(nr_irqs, GFP_KERNEL);
+	vgic_cpu->vgic_irq_lr_map = kmalloc(nr_irqs, GFP_KERNEL);
 
 	if (!vgic_cpu->pending_shared || !vgic_cpu->vgic_irq_lr_map) {
 		kvm_vgic_vcpu_destroy(vcpu);
 		return -ENOMEM;
 	}
 
-	return 0;
-}
-
-/**
- * kvm_vgic_vcpu_init - Initialize per-vcpu VGIC state
- * @vcpu: pointer to the vcpu struct
- *
- * Initialize the vgic_cpu struct and vgic_dist struct fields pertaining to
- * this vcpu and enable the VGIC for this VCPU
- */
-static void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
-{
-	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
-	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
-	int i;
-
-	for (i = 0; i < dist->nr_irqs; i++) {
-		if (i < VGIC_NR_PPIS)
-			vgic_bitmap_set_irq_val(&dist->irq_enabled,
-						vcpu->vcpu_id, i, 1);
-		if (i < VGIC_NR_PRIVATE_IRQS)
-			vgic_bitmap_set_irq_val(&dist->irq_cfg,
-						vcpu->vcpu_id, i, VGIC_CFG_EDGE);
-
-		vgic_cpu->vgic_irq_lr_map[i] = LR_EMPTY;
-	}
+	memset(vgic_cpu->vgic_irq_lr_map, LR_EMPTY, nr_irqs);
 
 	/*
 	 * Store the number of LRs per vcpu, so we don't have to go
@@ -1767,7 +1743,7 @@ static void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
 	 */
 	vgic_cpu->nr_lr = vgic->nr_lr;
 
-	vgic_enable(vcpu);
+	return 0;
 }
 
 void kvm_vgic_destroy(struct kvm *kvm)
@@ -1804,12 +1780,12 @@ void kvm_vgic_destroy(struct kvm *kvm)
  * Allocate and initialize the various data structures. Must be called
  * with kvm->lock held!
  */
-static int vgic_init_maps(struct kvm *kvm)
+static int vgic_init(struct kvm *kvm)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
 	struct kvm_vcpu *vcpu;
 	int nr_cpus, nr_irqs;
-	int ret, i;
+	int ret, i, vcpu_id;
 
 	if (dist->nr_cpus)	/* Already allocated */
 		return 0;
@@ -1859,16 +1835,28 @@ static int vgic_init_maps(struct kvm *kvm)
 	if (ret)
 		goto out;
 
-	kvm_for_each_vcpu(i, vcpu, kvm) {
+	for (i = VGIC_NR_PRIVATE_IRQS; i < dist->nr_irqs; i += 4)
+		vgic_set_target_reg(kvm, 0, i);
+
+	kvm_for_each_vcpu(vcpu_id, vcpu, kvm) {
 		ret = vgic_vcpu_init_maps(vcpu, nr_irqs);
 		if (ret) {
 			kvm_err("VGIC: Failed to allocate vcpu memory\n");
 			break;
 		}
-	}
 
-	for (i = VGIC_NR_PRIVATE_IRQS; i < dist->nr_irqs; i += 4)
-		vgic_set_target_reg(kvm, 0, i);
+		for (i = 0; i < dist->nr_irqs; i++) {
+			if (i < VGIC_NR_PPIS)
+				vgic_bitmap_set_irq_val(&dist->irq_enabled,
+							vcpu->vcpu_id, i, 1);
+			if (i < VGIC_NR_PRIVATE_IRQS)
+				vgic_bitmap_set_irq_val(&dist->irq_cfg,
+							vcpu->vcpu_id, i,
+							VGIC_CFG_EDGE);
+		}
+
+		vgic_enable(vcpu);
+	}
 
 out:
 	if (ret)
@@ -1878,18 +1866,16 @@ out:
 }
 
 /**
- * kvm_vgic_init - Initialize global VGIC state before running any VCPUs
+ * kvm_vgic_map_resources - Configure global VGIC state before running any VCPUs
  * @kvm: pointer to the kvm struct
  *
  * Map the virtual CPU interface into the VM before running any VCPUs.  We
  * can't do this at creation time, because user space must first set the
- * virtual CPU interface address in the guest physical address space.  Also
- * initialize the ITARGETSRn regs to 0 on the emulated distributor.
+ * virtual CPU interface address in the guest physical address space.
  */
-int kvm_vgic_init(struct kvm *kvm)
+int kvm_vgic_map_resources(struct kvm *kvm)
 {
-	struct kvm_vcpu *vcpu;
-	int ret = 0, i;
+	int ret = 0;
 
 	if (!irqchip_in_kernel(kvm))
 		return 0;
@@ -1906,7 +1892,11 @@ int kvm_vgic_init(struct kvm *kvm)
 		goto out;
 	}
 
-	ret = vgic_init_maps(kvm);
+	/*
+	 * Initialize the vgic if this hasn't already been done on demand by
+	 * accessing the vgic state from userspace.
+	 */
+	ret = vgic_init(kvm);
 	if (ret) {
 		kvm_err("Unable to allocate maps\n");
 		goto out;
@@ -1920,9 +1910,6 @@ int kvm_vgic_init(struct kvm *kvm)
 		goto out;
 	}
 
-	kvm_for_each_vcpu(i, vcpu, kvm)
-		kvm_vgic_vcpu_init(vcpu);
-
 	kvm->arch.vgic.ready = true;
 out:
 	if (ret)
@@ -2167,7 +2154,7 @@ static int vgic_attr_regs_access(struct kvm_device *dev,
 
 	mutex_lock(&dev->kvm->lock);
 
-	ret = vgic_init_maps(dev->kvm);
+	ret = vgic_init(dev->kvm);
 	if (ret)
 		goto out;
 
-- 
2.1.2.330.g565301e.dirty


* [PATCH v2 2/6] arm/arm64: KVM: Rename vgic_initialized to vgic_ready
  2014-12-13 11:17 [PATCH v2 0/6] Fix vgic initialization problems Christoffer Dall
  2014-12-13 11:17 ` [PATCH v2 1/6] arm/arm64: KVM: vgic: move reset initialization into vgic_init_maps() Christoffer Dall
@ 2014-12-13 11:17 ` Christoffer Dall
  2014-12-13 11:17 ` [PATCH v2 3/6] arm/arm64: KVM: Add (new) vgic_initialized macro Christoffer Dall
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Christoffer Dall @ 2014-12-13 11:17 UTC (permalink / raw)
  To: linux-arm-kernel

The vgic_initialized() macro currently returns the state of the
vgic->ready flag, which indicates if the vgic is ready to be used when
running a VM, not specifically if its internal state has been
initialized.

Rename the macro accordingly in preparation for a more nuanced
initialization flow.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 arch/arm/kvm/arm.c     | 2 +-
 include/kvm/arm_vgic.h | 4 ++--
 virt/kvm/arm/vgic.c    | 6 +++---
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index a56cbb5..a9d005f 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -430,7 +430,7 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	 * Map the VGIC hardware resources before running a vcpu the first
 	 * time on this VM.
 	 */
-	if (unlikely(!vgic_initialized(vcpu->kvm))) {
+	if (unlikely(!vgic_ready(vcpu->kvm))) {
 		ret = kvm_vgic_map_resources(vcpu->kvm);
 		if (ret)
 			return ret;
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index fe9783b..3e262b9 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -287,7 +287,7 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		      struct kvm_exit_mmio *mmio);
 
 #define irqchip_in_kernel(k)	(!!((k)->arch.vgic.in_kernel))
-#define vgic_initialized(k)	((k)->arch.vgic.ready)
+#define vgic_ready(k)		((k)->arch.vgic.ready)
 
 int vgic_v2_probe(struct device_node *vgic_node,
 		  const struct vgic_ops **ops,
@@ -369,7 +369,7 @@ static inline int irqchip_in_kernel(struct kvm *kvm)
 	return 0;
 }
 
-static inline bool vgic_initialized(struct kvm *kvm)
+static inline bool vgic_ready(struct kvm *kvm)
 {
 	return true;
 }
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 146c35f..22c8304 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1693,7 +1693,7 @@ out:
 int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
 			bool level)
 {
-	if (likely(vgic_initialized(kvm)) &&
+	if (likely(vgic_ready(kvm)) &&
 	    vgic_update_irq_pending(kvm, cpuid, irq_num, level))
 		vgic_kick_vcpus(kvm);
 
@@ -1882,7 +1882,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
 
 	mutex_lock(&kvm->lock);
 
-	if (vgic_initialized(kvm))
+	if (vgic_ready(kvm))
 		goto out;
 
 	if (IS_VGIC_ADDR_UNDEF(kvm->arch.vgic.vgic_dist_base) ||
@@ -2276,7 +2276,7 @@ static int vgic_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
 
 		mutex_lock(&dev->kvm->lock);
 
-		if (vgic_initialized(dev->kvm) || dev->kvm->arch.vgic.nr_irqs)
+		if (vgic_ready(dev->kvm) || dev->kvm->arch.vgic.nr_irqs)
 			ret = -EBUSY;
 		else
 			dev->kvm->arch.vgic.nr_irqs = val;
-- 
2.1.2.330.g565301e.dirty


* [PATCH v2 3/6] arm/arm64: KVM: Add (new) vgic_initialized macro
  2014-12-13 11:17 [PATCH v2 0/6] Fix vgic initialization problems Christoffer Dall
  2014-12-13 11:17 ` [PATCH v2 1/6] arm/arm64: KVM: vgic: move reset initialization into vgic_init_maps() Christoffer Dall
  2014-12-13 11:17 ` [PATCH v2 2/6] arm/arm64: KVM: Rename vgic_initialized to vgic_ready Christoffer Dall
@ 2014-12-13 11:17 ` Christoffer Dall
  2014-12-13 11:17 ` [PATCH v2 4/6] arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized Christoffer Dall
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Christoffer Dall @ 2014-12-13 11:17 UTC (permalink / raw)
  To: linux-arm-kernel

Some code paths will need to check whether the internal state of the
vgic has been initialized (such as when creating new VCPUs), so
introduce a macro that checks the nr_cpus field, which is set once the
vgic has been initialized.

Also set nr_cpus = 0 in kvm_vgic_destroy, because the error path in
vgic_init() will call this function, and code should never erroneously
assume the vgic to be properly initialized after an error.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 include/kvm/arm_vgic.h | 6 ++++++
 virt/kvm/arm/vgic.c    | 3 ++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 3e262b9..ac4888d 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -287,6 +287,7 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		      struct kvm_exit_mmio *mmio);
 
 #define irqchip_in_kernel(k)	(!!((k)->arch.vgic.in_kernel))
+#define vgic_initialized(k)	(!!((k)->arch.vgic.nr_cpus))
 #define vgic_ready(k)		((k)->arch.vgic.ready)
 
 int vgic_v2_probe(struct device_node *vgic_node,
@@ -369,6 +370,11 @@ static inline int irqchip_in_kernel(struct kvm *kvm)
 	return 0;
 }
 
+static inline bool vgic_initialized(struct kvm *kvm)
+{
+	return true;
+}
+
 static inline bool vgic_ready(struct kvm *kvm)
 {
 	return true;
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 22c8304..e1bef68 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1774,6 +1774,7 @@ void kvm_vgic_destroy(struct kvm *kvm)
 	dist->irq_spi_cpu = NULL;
 	dist->irq_spi_target = NULL;
 	dist->irq_pending_on_cpu = NULL;
+	dist->nr_cpus = 0;
 }
 
 /*
@@ -1787,7 +1788,7 @@ static int vgic_init(struct kvm *kvm)
 	int nr_cpus, nr_irqs;
 	int ret, i, vcpu_id;
 
-	if (dist->nr_cpus)	/* Already allocated */
+	if (vgic_initialized(kvm))
 		return 0;
 
 	nr_cpus = dist->nr_cpus = atomic_read(&kvm->online_vcpus);
-- 
2.1.2.330.g565301e.dirty


* [PATCH v2 4/6] arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized
  2014-12-13 11:17 [PATCH v2 0/6] Fix vgic initialization problems Christoffer Dall
                   ` (2 preceding siblings ...)
  2014-12-13 11:17 ` [PATCH v2 3/6] arm/arm64: KVM: Add (new) vgic_initialized macro Christoffer Dall
@ 2014-12-13 11:17 ` Christoffer Dall
  2014-12-13 11:17 ` [PATCH v2 5/6] arm/arm64: KVM: Initialize the vgic on-demand when injecting IRQs Christoffer Dall
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Christoffer Dall @ 2014-12-13 11:17 UTC (permalink / raw)
  To: linux-arm-kernel

When the vgic initializes its internal state, it does so based on the
number of VCPUs available at the time.  If we allow KVM to create more
VCPUs after the VGIC has been initialized, we are likely to fail in
unfortunate ways later, for example with buffer overflows.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
This replaces Eric Auger's previous patch
(https://lists.cs.columbia.edu/pipermail/kvmarm/2014-December/012646.html),
because it fits better with testing to include it in this series and I
realized that we need to add a check against irqchip_in_kernel() as
well.

 arch/arm/kvm/arm.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index a9d005f..d4da244 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -213,6 +213,11 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 	int err;
 	struct kvm_vcpu *vcpu;
 
+	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm)) {
+		err = -EBUSY;
+		goto out;
+	}
+
 	vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
 	if (!vcpu) {
 		err = -ENOMEM;
-- 
2.1.2.330.g565301e.dirty


* [PATCH v2 5/6] arm/arm64: KVM: Initialize the vgic on-demand when injecting IRQs
  2014-12-13 11:17 [PATCH v2 0/6] Fix vgic initialization problems Christoffer Dall
                   ` (3 preceding siblings ...)
  2014-12-13 11:17 ` [PATCH v2 4/6] arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized Christoffer Dall
@ 2014-12-13 11:17 ` Christoffer Dall
  2014-12-14 11:35   ` Marc Zyngier
  2014-12-13 11:17 ` [PATCH v2 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers Christoffer Dall
  2014-12-15 10:20 ` [PATCH v3 " Christoffer Dall
  6 siblings, 1 reply; 14+ messages in thread
From: Christoffer Dall @ 2014-12-13 11:17 UTC (permalink / raw)
  To: linux-arm-kernel

Userspace assumes that it can wire up IRQ injections after having
created all VCPUs and after having created the VGIC, but potentially
before starting the first VCPU.  This can currently lead to lost IRQs
because the state of that IRQ injection is not stored anywhere and we
don't return an error to userspace.

We haven't seen this problem manifest itself yet, presumably because
guests reset the devices on boot, but this could cause issues with
migration and other non-standard startup configurations.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 virt/kvm/arm/vgic.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index e1bef68..330445c 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1693,11 +1693,22 @@ out:
 int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
 			bool level)
 {
-	if (likely(vgic_ready(kvm)) &&
-	    vgic_update_irq_pending(kvm, cpuid, irq_num, level))
+	int ret = 0;
+
+	if (unlikely(!vgic_initialized(kvm))) {
+		mutex_lock(&kvm->lock);
+		ret = vgic_init(kvm);
+		mutex_unlock(&kvm->lock);
+
+		if (ret)
+			goto out;
+	}
+
+	if (vgic_update_irq_pending(kvm, cpuid, irq_num, level))
 		vgic_kick_vcpus(kvm);
 
-	return 0;
+out:
+	return ret;
 }
 
 static irqreturn_t vgic_maintenance_handler(int irq, void *data)
-- 
2.1.2.330.g565301e.dirty


* [PATCH v2 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers
  2014-12-13 11:17 [PATCH v2 0/6] Fix vgic initialization problems Christoffer Dall
                   ` (4 preceding siblings ...)
  2014-12-13 11:17 ` [PATCH v2 5/6] arm/arm64: KVM: Initialize the vgic on-demand when injecting IRQs Christoffer Dall
@ 2014-12-13 11:17 ` Christoffer Dall
  2014-12-14 11:33   ` Marc Zyngier
  2014-12-15 10:20 ` [PATCH v3 " Christoffer Dall
  6 siblings, 1 reply; 14+ messages in thread
From: Christoffer Dall @ 2014-12-13 11:17 UTC (permalink / raw)
  To: linux-arm-kernel

It is currently possible to run a VM with architected timers support
without creating an in-kernel VGIC, which will result in interrupts from
the virtual timer going nowhere.

To address this issue, move the architected timers initialization to the
time when we run a VCPU for the first time, and then only initialize
(and enable) the architected timers if we have a properly created and
initialized in-kernel VGIC.

When injecting interrupts from the virtual timer to the vgic, the
current setup should ensure that this never calls an on-demand init of
the VGIC, which is the only call path that could return an error from
kvm_vgic_inject_irq(), so capture the return value and raise a warning
if there's an error there.

We also change kvm_timer_init() from returning an int to returning
void, since the function always succeeds.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 arch/arm/kvm/arm.c           | 15 +++++++++++----
 include/kvm/arm_arch_timer.h |  2 +-
 virt/kvm/arm/arch_timer.c    | 15 ++++++++++-----
 3 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index d4da244..06f0431 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -127,8 +127,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (ret)
 		goto out_free_stage2_pgd;
 
-	kvm_timer_init(kvm);
-
 	/* Mark the initial VMID generation invalid */
 	kvm->arch.vmid_gen = 0;
 
@@ -424,6 +422,7 @@ static void update_vttbr(struct kvm *kvm)
 
 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 {
+	struct kvm *kvm = vcpu->kvm;
 	int ret;
 
 	if (likely(vcpu->arch.has_run_once))
@@ -435,12 +434,20 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	 * Map the VGIC hardware resources before running a vcpu the first
 	 * time on this VM.
 	 */
-	if (unlikely(!vgic_ready(vcpu->kvm))) {
-		ret = kvm_vgic_map_resources(vcpu->kvm);
+	if (unlikely(!vgic_ready(kvm))) {
+		ret = kvm_vgic_map_resources(kvm);
 		if (ret)
 			return ret;
 	}
 
+	/*
+	 * Initialize the Architected Timers only if we have an in-kernel VGIC
+	 * and it has been properly initialized, since we cannot handle
+	 * interrupts from the virtual timer with a userspace vgic.
+	 */
+	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
+		kvm_timer_init(kvm);
+
 	return 0;
 }
 
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index ad9db60..c9bd045 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -60,7 +60,7 @@ struct arch_timer_cpu {
 
 #ifdef CONFIG_KVM_ARM_TIMER
 int kvm_timer_hyp_init(void);
-int kvm_timer_init(struct kvm *kvm);
+void kvm_timer_init(struct kvm *kvm);
 void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 			  const struct kvm_irq_level *irq);
 void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 22fa819..48ce5cb 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -61,12 +61,14 @@ static void timer_disarm(struct arch_timer_cpu *timer)
 
 static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu)
 {
+	int ret;
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 
 	timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK;
-	kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
-			    timer->irq->irq,
-			    timer->irq->level);
+	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+				  timer->irq->irq,
+				  timer->irq->level);
+	WARN_ON(ret);
 }
 
 static irqreturn_t kvm_arch_timer_handler(int irq, void *dev_id)
@@ -307,12 +309,15 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
 	timer_disarm(timer);
 }
 
-int kvm_timer_init(struct kvm *kvm)
+void kvm_timer_init(struct kvm *kvm)
 {
+	if (kvm->arch.timer.enabled)
+		return;
+
 	if (timecounter && wqueue) {
 		kvm->arch.timer.cntvoff = kvm_phys_timer_read();
 		kvm->arch.timer.enabled = 1;
 	}
 
-	return 0;
+	return;
 }
-- 
2.1.2.330.g565301e.dirty


* [PATCH v2 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers
  2014-12-13 11:17 ` [PATCH v2 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers Christoffer Dall
@ 2014-12-14 11:33   ` Marc Zyngier
  2014-12-14 14:14     ` Christoffer Dall
  0 siblings, 1 reply; 14+ messages in thread
From: Marc Zyngier @ 2014-12-14 11:33 UTC (permalink / raw)
  To: linux-arm-kernel

On Sat, Dec 13 2014 at 11:17:29 AM, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> It is currently possible to run a VM with architected timers support
> without creating an in-kernel VGIC, which will result in interrupts from
> the virtual timer going nowhere.
>
> To address this issue, move the architected timers initialization to the
> time when we run a VCPU for the first time, and then only initialize
> (and enable) the architected timers if we have a properly created and
> initialized in-kernel VGIC.
>
> When injecting interrupts from the virtual timer to the vgic, the
> current setup should ensure that this never calls an on-demand init of
> the VGIC, which is the only call path that could return an error from
> kvm_vgic_inject_irq(), so capture the return value and raise a warning
> if there's an error there.
>
> We also change the kvm_timer_init() function from returning an int to be
> a void function, since the function always succeeds.
>
> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> ---
>  arch/arm/kvm/arm.c           | 15 +++++++++++----
>  include/kvm/arm_arch_timer.h |  2 +-
>  virt/kvm/arm/arch_timer.c    | 15 ++++++++++-----
>  3 files changed, 22 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index d4da244..06f0431 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -127,8 +127,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  	if (ret)
>  		goto out_free_stage2_pgd;
>  
> -	kvm_timer_init(kvm);
> -
>  	/* Mark the initial VMID generation invalid */
>  	kvm->arch.vmid_gen = 0;
>  
> @@ -424,6 +422,7 @@ static void update_vttbr(struct kvm *kvm)
>  
>  static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm *kvm = vcpu->kvm;
>  	int ret;
>  
>  	if (likely(vcpu->arch.has_run_once))
> @@ -435,12 +434,20 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
>  	 * Map the VGIC hardware resources before running a vcpu the first
>  	 * time on this VM.
>  	 */
> -	if (unlikely(!vgic_ready(vcpu->kvm))) {
> -		ret = kvm_vgic_map_resources(vcpu->kvm);
> +	if (unlikely(!vgic_ready(kvm))) {
> +		ret = kvm_vgic_map_resources(kvm);
>  		if (ret)
>  			return ret;
>  	}
>  
> +	/*
> +	 * Initialize the Architected Timers only if we have an in-kernel VGIC
> +	 * and it has been properly initialized, since we cannot handle
> +	 * interrupts from the virtual timer with a userspace vgic.
> +	 */
> +	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
> +		kvm_timer_init(kvm);
> +
>  	return 0;
>  }
>  
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index ad9db60..c9bd045 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -60,7 +60,7 @@ struct arch_timer_cpu {
>  
>  #ifdef CONFIG_KVM_ARM_TIMER
>  int kvm_timer_hyp_init(void);
> -int kvm_timer_init(struct kvm *kvm);
> +void kvm_timer_init(struct kvm *kvm);
>  void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  			  const struct kvm_irq_level *irq);
>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index 22fa819..48ce5cb 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -61,12 +61,14 @@ static void timer_disarm(struct arch_timer_cpu *timer)
>  
>  static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu)
>  {
> +	int ret;
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>  
>  	timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK;
> -	kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> -			    timer->irq->irq,
> -			    timer->irq->level);
> +	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> +				  timer->irq->irq,
> +				  timer->irq->level);
> +	WARN_ON(ret);
>  }
>  
>  static irqreturn_t kvm_arch_timer_handler(int irq, void *dev_id)
> @@ -307,12 +309,15 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
>  	timer_disarm(timer);
>  }
>  
> -int kvm_timer_init(struct kvm *kvm)
> +void kvm_timer_init(struct kvm *kvm)
>  {
> +	if (kvm->arch.timer.enabled)
> +		return;
> +
>  	if (timecounter && wqueue) {
>  		kvm->arch.timer.cntvoff = kvm_phys_timer_read();
>  		kvm->arch.timer.enabled = 1;
>  	}

Be careful, you've now introduced a race between two vcpus doing their
"first run" at the same time. The consequence is fairly minor (only the
virtual offset is affected, and that's unlikely to cause any ill effect
that early in the life of the VM), but still.

We can decide that this is not big enough a deal to warrant a lock, but
that definitely deserves a comment.

Another thing to consider is how this works with restoring a VM. We
relied on the fact that CNTVOFF is set by system register accesses after
the timer init, but we're now overriding the value. Am I missing
something crucial?

>  
> -	return 0;
> +	return;
>  }

Don't bother with the return ;-).

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.


* [PATCH v2 5/6] arm/arm64: KVM: Initialize the vgic on-demand when injecting IRQs
  2014-12-13 11:17 ` [PATCH v2 5/6] arm/arm64: KVM: Initialize the vgic on-demand when injecting IRQs Christoffer Dall
@ 2014-12-14 11:35   ` Marc Zyngier
  0 siblings, 0 replies; 14+ messages in thread
From: Marc Zyngier @ 2014-12-14 11:35 UTC (permalink / raw)
  To: linux-arm-kernel

On Sat, Dec 13 2014 at 11:17:28 AM, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> Userspace assumes that it can wire up IRQ injections after having
> created all VCPUs and after having created the VGIC, but potentially
> before starting the first VCPU.  This can currently lead to lost IRQs
> because the state of that IRQ injection is not stored anywhere and we
> don't return an error to userspace.
>
> We haven't seen this problem manifest itself yet, presumably because
> guests reset the devices on boot, but this could cause issues with
> migration and other non-standard startup configurations.
>
> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> ---
>  virt/kvm/arm/vgic.c | 17 ++++++++++++++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> index e1bef68..330445c 100644
> --- a/virt/kvm/arm/vgic.c
> +++ b/virt/kvm/arm/vgic.c
> @@ -1693,11 +1693,22 @@ out:
>  int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
>  			bool level)
>  {
> -	if (likely(vgic_ready(kvm)) &&
> -	    vgic_update_irq_pending(kvm, cpuid, irq_num, level))
> +	int ret = 0;
> +
> +	if (unlikely(!vgic_initialized(kvm))) {
> +		mutex_lock(&kvm->lock);
> +		ret = vgic_init(kvm);
> +		mutex_unlock(&kvm->lock);
> +
> +		if (ret)
> +			goto out;
> +	}
> +
> +	if (vgic_update_irq_pending(kvm, cpuid, irq_num, level))
>  		vgic_kick_vcpus(kvm);
>  
> -	return 0;
> +out:
> +	return ret;
>  }
>  
>  static irqreturn_t vgic_maintenance_handler(int irq, void *data)

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny.
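
[Editorial note: the on-demand init pattern in this patch -- an
unlocked fast-path check, with the real check redone under kvm->lock
inside vgic_init() -- can be sketched as a userspace model. Field and
function names mirror the patch but this is an illustration, not the
kernel implementation:]

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Model of the on-demand init in kvm_vgic_inject_irq(): the unlocked
 * vgic_initialized() test is only a fast path; vgic_init() rechecks
 * under kvm->lock, so concurrent injectors initialize exactly once. */

struct kvm_model {
    pthread_mutex_t lock;
    bool vgic_initialized;
    int init_calls;             /* counts actual initializations */
};

static int vgic_init(struct kvm_model *kvm)
{
    if (kvm->vgic_initialized)  /* recheck, now under the lock */
        return 0;
    kvm->init_calls++;
    kvm->vgic_initialized = true;
    return 0;
}

static int inject_irq(struct kvm_model *kvm)
{
    int ret = 0;

    if (!kvm->vgic_initialized) {           /* unlocked fast path */
        pthread_mutex_lock(&kvm->lock);
        ret = vgic_init(kvm);
        pthread_mutex_unlock(&kvm->lock);
        if (ret)
            return ret;
    }
    /* ... update pending state and kick vcpus ... */
    return ret;
}
```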


* [PATCH v2 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers
  2014-12-14 11:33   ` Marc Zyngier
@ 2014-12-14 14:14     ` Christoffer Dall
  2014-12-15  8:57       ` Marc Zyngier
  0 siblings, 1 reply; 14+ messages in thread
From: Christoffer Dall @ 2014-12-14 14:14 UTC (permalink / raw)
  To: linux-arm-kernel

On Sun, Dec 14, 2014 at 11:33:04AM +0000, Marc Zyngier wrote:
> On Sat, Dec 13 2014 at 11:17:29 AM, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> > It is currently possible to run a VM with architected timers support
> > without creating an in-kernel VGIC, which will result in interrupts from
> > the virtual timer going nowhere.
> >
> > To address this issue, move the architected timers initialization to the
> > time when we run a VCPU for the first time, and then only initialize
> > (and enable) the architected timers if we have a properly created and
> > initialized in-kernel VGIC.
> >
> > When injecting interrupts from the virtual timer to the vgic, the
> > current setup should ensure that this never calls an on-demand init of
> > the VGIC, which is the only call path that could return an error from
> > kvm_vgic_inject_irq(), so capture the return value and raise a warning
> > if there's an error there.
> >
> > We also change the kvm_timer_init() function from returning an int to being
> > a void function, since the function always succeeds.
> >
> > Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> > ---
> >  arch/arm/kvm/arm.c           | 15 +++++++++++----
> >  include/kvm/arm_arch_timer.h |  2 +-
> >  virt/kvm/arm/arch_timer.c    | 15 ++++++++++-----
> >  3 files changed, 22 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> > index d4da244..06f0431 100644
> > --- a/arch/arm/kvm/arm.c
> > +++ b/arch/arm/kvm/arm.c
> > @@ -127,8 +127,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> >  	if (ret)
> >  		goto out_free_stage2_pgd;
> >  
> > -	kvm_timer_init(kvm);
> > -
> >  	/* Mark the initial VMID generation invalid */
> >  	kvm->arch.vmid_gen = 0;
> >  
> > @@ -424,6 +422,7 @@ static void update_vttbr(struct kvm *kvm)
> >  
> >  static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> >  {
> > +	struct kvm *kvm = vcpu->kvm;
> >  	int ret;
> >  
> >  	if (likely(vcpu->arch.has_run_once))
> > @@ -435,12 +434,20 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> >  	 * Map the VGIC hardware resources before running a vcpu the first
> >  	 * time on this VM.
> >  	 */
> > -	if (unlikely(!vgic_ready(vcpu->kvm))) {
> > -		ret = kvm_vgic_map_resources(vcpu->kvm);
> > +	if (unlikely(!vgic_ready(kvm))) {
> > +		ret = kvm_vgic_map_resources(kvm);
> >  		if (ret)
> >  			return ret;
> >  	}
> >  
> > +	/*
> > +	 * Initialize the Architected Timers only if we have an in-kernel VGIC
> > +	 * and it has been properly initialized, since we cannot handle
> > +	 * interrupts from the virtual timer with a userspace vgic.
> > +	 */
> > +	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
> > +		kvm_timer_init(kvm);
> > +
> >  	return 0;
> >  }
> >  
> > diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> > index ad9db60..c9bd045 100644
> > --- a/include/kvm/arm_arch_timer.h
> > +++ b/include/kvm/arm_arch_timer.h
> > @@ -60,7 +60,7 @@ struct arch_timer_cpu {
> >  
> >  #ifdef CONFIG_KVM_ARM_TIMER
> >  int kvm_timer_hyp_init(void);
> > -int kvm_timer_init(struct kvm *kvm);
> > +void kvm_timer_init(struct kvm *kvm);
> >  void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >  			  const struct kvm_irq_level *irq);
> >  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
> > diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> > index 22fa819..48ce5cb 100644
> > --- a/virt/kvm/arm/arch_timer.c
> > +++ b/virt/kvm/arm/arch_timer.c
> > @@ -61,12 +61,14 @@ static void timer_disarm(struct arch_timer_cpu *timer)
> >  
> >  static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu)
> >  {
> > +	int ret;
> >  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> >  
> >  	timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK;
> > -	kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> > -			    timer->irq->irq,
> > -			    timer->irq->level);
> > +	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> > +				  timer->irq->irq,
> > +				  timer->irq->level);
> > +	WARN_ON(ret);
> >  }
> >  
> >  static irqreturn_t kvm_arch_timer_handler(int irq, void *dev_id)
> > @@ -307,12 +309,15 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
> >  	timer_disarm(timer);
> >  }
> >  
> > -int kvm_timer_init(struct kvm *kvm)
> > +void kvm_timer_init(struct kvm *kvm)
> >  {
> > +	if (kvm->arch.timer.enabled)
> > +		return;
> > +
> >  	if (timecounter && wqueue) {
> >  		kvm->arch.timer.cntvoff = kvm_phys_timer_read();
> >  		kvm->arch.timer.enabled = 1;
> >  	}
> 
> Be careful, you've now introduced a race between two vcpus doing their
> "first run" at the same time. The consequence is fairly minor (only the
> virtual offset is affected, and that's unlikely to cause any ill effect
> that early in the life of the VM), but still.
> 
> We can decide that this is not big enough a deal to warrant a lock, but
> that definitely deserves a comment.

That escaped my attention completely, thanks for spotting it.

> 
> Another thing to consider is how this works with restoring a VM. We
> relied on the fact that CNTVOFF is set by system register accesses after
> the timer init, but we're now overriding the value. Am I missing
> something crucial?
> 

The answer to how it works is: It doesn't.

So the cleanest approach would be to initialize the cntvoff when
creating the VM, then enable the timer at first vcpu run, adding a comment
saying that enabling the timer multiple times doesn't hurt.

Consequently adding a kvm_timer_enable() function that gets called on
first vcpu_run ?

> >  
> > -	return 0;
> > +	return;
> >  }
> 
> Don't bother with the return ;-).
> 
Right, pattern matching is my thing. ;)

-Christoffer


* [PATCH v2 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers
  2014-12-14 14:14     ` Christoffer Dall
@ 2014-12-15  8:57       ` Marc Zyngier
  0 siblings, 0 replies; 14+ messages in thread
From: Marc Zyngier @ 2014-12-15  8:57 UTC (permalink / raw)
  To: linux-arm-kernel

On 14/12/14 14:14, Christoffer Dall wrote:
> On Sun, Dec 14, 2014 at 11:33:04AM +0000, Marc Zyngier wrote:
>> On Sat, Dec 13 2014 at 11:17:29 AM, Christoffer Dall <christoffer.dall@linaro.org> wrote:
>>> It is currently possible to run a VM with architected timers support
>>> without creating an in-kernel VGIC, which will result in interrupts from
>>> the virtual timer going nowhere.
>>>
>>> To address this issue, move the architected timers initialization to the
>>> time when we run a VCPU for the first time, and then only initialize
>>> (and enable) the architected timers if we have a properly created and
>>> initialized in-kernel VGIC.
>>>
>>> When injecting interrupts from the virtual timer to the vgic, the
>>> current setup should ensure that this never calls an on-demand init of
>>> the VGIC, which is the only call path that could return an error from
>>> kvm_vgic_inject_irq(), so capture the return value and raise a warning
>>> if there's an error there.
>>>
>>> We also change the kvm_timer_init() function from returning an int to being
>>> a void function, since the function always succeeds.
>>>
>>> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
>>> ---
>>>  arch/arm/kvm/arm.c           | 15 +++++++++++----
>>>  include/kvm/arm_arch_timer.h |  2 +-
>>>  virt/kvm/arm/arch_timer.c    | 15 ++++++++++-----
>>>  3 files changed, 22 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>> index d4da244..06f0431 100644
>>> --- a/arch/arm/kvm/arm.c
>>> +++ b/arch/arm/kvm/arm.c
>>> @@ -127,8 +127,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>>>  	if (ret)
>>>  		goto out_free_stage2_pgd;
>>>  
>>> -	kvm_timer_init(kvm);
>>> -
>>>  	/* Mark the initial VMID generation invalid */
>>>  	kvm->arch.vmid_gen = 0;
>>>  
>>> @@ -424,6 +422,7 @@ static void update_vttbr(struct kvm *kvm)
>>>  
>>>  static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
>>>  {
>>> +	struct kvm *kvm = vcpu->kvm;
>>>  	int ret;
>>>  
>>>  	if (likely(vcpu->arch.has_run_once))
>>> @@ -435,12 +434,20 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
>>>  	 * Map the VGIC hardware resources before running a vcpu the first
>>>  	 * time on this VM.
>>>  	 */
>>> -	if (unlikely(!vgic_ready(vcpu->kvm))) {
>>> -		ret = kvm_vgic_map_resources(vcpu->kvm);
>>> +	if (unlikely(!vgic_ready(kvm))) {
>>> +		ret = kvm_vgic_map_resources(kvm);
>>>  		if (ret)
>>>  			return ret;
>>>  	}
>>>  
>>> +	/*
>>> +	 * Initialize the Architected Timers only if we have an in-kernel VGIC
>>> +	 * and it has been properly initialized, since we cannot handle
>>> +	 * interrupts from the virtual timer with a userspace vgic.
>>> +	 */
>>> +	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
>>> +		kvm_timer_init(kvm);
>>> +
>>>  	return 0;
>>>  }
>>>  
>>> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
>>> index ad9db60..c9bd045 100644
>>> --- a/include/kvm/arm_arch_timer.h
>>> +++ b/include/kvm/arm_arch_timer.h
>>> @@ -60,7 +60,7 @@ struct arch_timer_cpu {
>>>  
>>>  #ifdef CONFIG_KVM_ARM_TIMER
>>>  int kvm_timer_hyp_init(void);
>>> -int kvm_timer_init(struct kvm *kvm);
>>> +void kvm_timer_init(struct kvm *kvm);
>>>  void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>>  			  const struct kvm_irq_level *irq);
>>>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>> index 22fa819..48ce5cb 100644
>>> --- a/virt/kvm/arm/arch_timer.c
>>> +++ b/virt/kvm/arm/arch_timer.c
>>> @@ -61,12 +61,14 @@ static void timer_disarm(struct arch_timer_cpu *timer)
>>>  
>>>  static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu)
>>>  {
>>> +	int ret;
>>>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>>>  
>>>  	timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK;
>>> -	kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>>> -			    timer->irq->irq,
>>> -			    timer->irq->level);
>>> +	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>>> +				  timer->irq->irq,
>>> +				  timer->irq->level);
>>> +	WARN_ON(ret);
>>>  }
>>>  
>>>  static irqreturn_t kvm_arch_timer_handler(int irq, void *dev_id)
>>> @@ -307,12 +309,15 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
>>>  	timer_disarm(timer);
>>>  }
>>>  
>>> -int kvm_timer_init(struct kvm *kvm)
>>> +void kvm_timer_init(struct kvm *kvm)
>>>  {
>>> +	if (kvm->arch.timer.enabled)
>>> +		return;
>>> +
>>>  	if (timecounter && wqueue) {
>>>  		kvm->arch.timer.cntvoff = kvm_phys_timer_read();
>>>  		kvm->arch.timer.enabled = 1;
>>>  	}
>>
>> Be careful, you've now introduced a race between two vcpus doing their
>> "first run" at the same time. The consequence is fairly minor (only the
>> virtual offset is affected, and that's unlikely to cause any ill effect
>> that early in the life of the VM), but still.
>>
>> We can decide that this is not big enough a deal to warrant a lock, but
>> that definitely deserves a comment.
> 
> That escaped my attention completely, thanks for spotting it.
> 
>>
>> Another thing to consider is how this works with restoring a VM. We
>> relied on the fact that CNTVOFF is set by system register accesses after
>> the timer init, but we're now overriding the value. Am I missing
>> something crucial?
>>
> 
> The answer to how it works is: It doesn't.
> 
> So the cleanest approach would be to initialize the cntvoff when
> creating the VM, then enable the timer at first vcpu run, adding a comment
> saying that enabling the timer multiple times doesn't hurt.
> 
> Consequently adding a kvm_timer_enable() function that gets called on
> first vcpu_run ?

Seems like a sensible idea. You could even make the "enable" part of
kvm_timer_vcpu_init(), and move that call to the "first run" section. Up
to you, really.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* [PATCH v3 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers
  2014-12-13 11:17 [PATCH v2 0/6] Fix vgic initialization problems Christoffer Dall
                   ` (5 preceding siblings ...)
  2014-12-13 11:17 ` [PATCH v2 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers Christoffer Dall
@ 2014-12-15 10:20 ` Christoffer Dall
  2014-12-15 10:39   ` Marc Zyngier
  6 siblings, 1 reply; 14+ messages in thread
From: Christoffer Dall @ 2014-12-15 10:20 UTC (permalink / raw)
  To: linux-arm-kernel

It is currently possible to run a VM with architected timers support
without creating an in-kernel VGIC, which will result in interrupts from
the virtual timer going nowhere.

To address this issue, move the architected timers initialization to the
time when we run a VCPU for the first time, and then only initialize
(and enable) the architected timers if we have a properly created and
initialized in-kernel VGIC.

When injecting interrupts from the virtual timer to the vgic, the
current setup should ensure that this never calls an on-demand init of
the VGIC, which is the only call path that could return an error from
kvm_vgic_inject_irq(), so capture the return value and raise a warning
if there's an error there.

We also change the kvm_timer_init() function from returning an int to being
a void function, since the function always succeeds.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
Changes [v2 -> v3]:
 - Split kvm_timer_init into kvm_timer_init and kvm_timer_enable
   and initialize the cntvoff in kvm_timer_init and only actually enable
   the timer if there is an in-kernel vgic.
 - Added comment about race from multiple VCPUs.
 - Support compiling on 32-bit ARM without vgic/arch-timers config
   option.

 arch/arm/kvm/arm.c           | 13 +++++++++++--
 include/kvm/arm_arch_timer.h | 10 ++++------
 virt/kvm/arm/arch_timer.c    | 30 ++++++++++++++++++++++--------
 3 files changed, 37 insertions(+), 16 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index d4da244..8cadfec 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -424,6 +424,7 @@ static void update_vttbr(struct kvm *kvm)
 
 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 {
+	struct kvm *kvm = vcpu->kvm;
 	int ret;
 
 	if (likely(vcpu->arch.has_run_once))
@@ -435,12 +436,20 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	 * Map the VGIC hardware resources before running a vcpu the first
 	 * time on this VM.
 	 */
-	if (unlikely(!vgic_ready(vcpu->kvm))) {
-		ret = kvm_vgic_map_resources(vcpu->kvm);
+	if (unlikely(!vgic_ready(kvm))) {
+		ret = kvm_vgic_map_resources(kvm);
 		if (ret)
 			return ret;
 	}
 
+	/*
+	 * Enable the arch timers only if we have an in-kernel VGIC
+	 * and it has been properly initialized, since we cannot handle
+	 * interrupts from the virtual timer with a userspace gic.
+	 */
+	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
+		kvm_timer_enable(kvm);
+
 	return 0;
 }
 
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index ad9db60..b3f45a5 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -60,7 +60,8 @@ struct arch_timer_cpu {
 
 #ifdef CONFIG_KVM_ARM_TIMER
 int kvm_timer_hyp_init(void);
-int kvm_timer_init(struct kvm *kvm);
+void kvm_timer_enable(struct kvm *kvm);
+void kvm_timer_init(struct kvm *kvm);
 void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 			  const struct kvm_irq_level *irq);
 void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
@@ -77,11 +78,8 @@ static inline int kvm_timer_hyp_init(void)
 	return 0;
 };
 
-static inline int kvm_timer_init(struct kvm *kvm)
-{
-	return 0;
-}
-
+static inline void kvm_timer_enable(struct kvm *kvm) {}
+static inline void kvm_timer_init(struct kvm *kvm) {}
 static inline void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 					const struct kvm_irq_level *irq) {}
 static inline void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu) {}
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 22fa819..1c0772b 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -61,12 +61,14 @@ static void timer_disarm(struct arch_timer_cpu *timer)
 
 static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu)
 {
+	int ret;
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 
 	timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK;
-	kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
-			    timer->irq->irq,
-			    timer->irq->level);
+	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+				  timer->irq->irq,
+				  timer->irq->level);
+	WARN_ON(ret);
 }
 
 static irqreturn_t kvm_arch_timer_handler(int irq, void *dev_id)
@@ -307,12 +309,24 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
 	timer_disarm(timer);
 }
 
-int kvm_timer_init(struct kvm *kvm)
+void kvm_timer_enable(struct kvm *kvm)
 {
-	if (timecounter && wqueue) {
-		kvm->arch.timer.cntvoff = kvm_phys_timer_read();
+	if (kvm->arch.timer.enabled)
+		return;
+
+	/*
+	 * There is a potential race here between VCPUs starting for the first
+	 * time, which may be enabling the timer multiple times.  That doesn't
+	 * hurt though, because we're just setting a variable to the same
+	 * value that it already had.  The important thing is that all
+	 * VCPUs have the enabled variable set, before entering the guest, if
+	 * the arch timers are enabled.
+	 */
+	if (timecounter && wqueue)
 		kvm->arch.timer.enabled = 1;
-	}
+}
 
-	return 0;
+void kvm_timer_init(struct kvm *kvm)
+{
+	kvm->arch.timer.cntvoff = kvm_phys_timer_read();
 }
-- 
2.1.2.330.g565301e.dirty
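
[Editorial note: the point of the v3 split -- kvm_timer_init() seeds
cntvoff once at VM creation, kvm_timer_enable() only flips the enable
bit -- is that a CNTVOFF value restored by userspace between the two
calls now survives the first vcpu run. A userspace sketch of that
sequencing (illustrative names and values, not kernel code):]

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the v3 ordering: init at VM creation, enable at first run.
 * Because enable no longer writes cntvoff, a migration restore done
 * in between is preserved. */

struct kvm_model {
    unsigned long cntvoff;
    bool enabled;
};

static unsigned long kvm_phys_timer_read(void)
{
    return 100;                 /* arbitrary stand-in counter value */
}

static void kvm_timer_init(struct kvm_model *kvm)
{
    kvm->cntvoff = kvm_phys_timer_read();   /* at KVM_CREATE_VM time */
}

static void kvm_timer_enable(struct kvm_model *kvm)
{
    if (kvm->enabled)
        return;
    kvm->enabled = true;        /* idempotent: no cntvoff write */
}

static unsigned long migrate_and_run(void)
{
    struct kvm_model kvm = { 0, false };

    kvm_timer_init(&kvm);       /* VM creation */
    kvm.cntvoff = 42;           /* userspace restores CNTVOFF */
    kvm_timer_enable(&kvm);     /* first vcpu run */
    kvm_timer_enable(&kvm);     /* second vcpu: benign repeat */
    return kvm.cntvoff;
}
```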


* [PATCH v3 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers
  2014-12-15 10:20 ` [PATCH v3 " Christoffer Dall
@ 2014-12-15 10:39   ` Marc Zyngier
  2014-12-15 10:50     ` Christoffer Dall
  0 siblings, 1 reply; 14+ messages in thread
From: Marc Zyngier @ 2014-12-15 10:39 UTC (permalink / raw)
  To: linux-arm-kernel

On 15/12/14 10:20, Christoffer Dall wrote:
> It is currently possible to run a VM with architected timers support
> without creating an in-kernel VGIC, which will result in interrupts from
> the virtual timer going nowhere.
> 
> To address this issue, move the architected timers initialization to the
> time when we run a VCPU for the first time, and then only initialize
> (and enable) the architected timers if we have a properly created and
> initialized in-kernel VGIC.
> 
> When injecting interrupts from the virtual timer to the vgic, the
> current setup should ensure that this never calls an on-demand init of
> the VGIC, which is the only call path that could return an error from
> kvm_vgic_inject_irq(), so capture the return value and raise a warning
> if there's an error there.
> 
> We also change the kvm_timer_init() function from returning an int to being
> a void function, since the function always succeeds.
> 
> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> ---
> Changes [v2 -> v3]:
>  - Split kvm_timer_init into kvm_timer_init and kvm_timer_enable
>    and initialize the cntvoff in kvm_timer_init and only actually enable
>    the timer if there is an in-kernel vgic.
>  - Added comment about race from multiple VCPUs.
>  - Support compiling on 32-bit ARM without vgic/arch-timers config
>    option.
> 
>  arch/arm/kvm/arm.c           | 13 +++++++++++--
>  include/kvm/arm_arch_timer.h | 10 ++++------
>  virt/kvm/arm/arch_timer.c    | 30 ++++++++++++++++++++++--------
>  3 files changed, 37 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index d4da244..8cadfec 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -424,6 +424,7 @@ static void update_vttbr(struct kvm *kvm)
>  
>  static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm *kvm = vcpu->kvm;
>  	int ret;
>  
>  	if (likely(vcpu->arch.has_run_once))
> @@ -435,12 +436,20 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
>  	 * Map the VGIC hardware resources before running a vcpu the first
>  	 * time on this VM.
>  	 */
> -	if (unlikely(!vgic_ready(vcpu->kvm))) {
> -		ret = kvm_vgic_map_resources(vcpu->kvm);
> +	if (unlikely(!vgic_ready(kvm))) {
> +		ret = kvm_vgic_map_resources(kvm);
>  		if (ret)
>  			return ret;
>  	}
>  
> +	/*
> +	 * Enable the arch timers only if we have an in-kernel VGIC
> +	 * and it has been properly initialized, since we cannot handle
> +	 * interrupts from the virtual timer with a userspace gic.
> +	 */
> +	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
> +		kvm_timer_enable(kvm);
> +
>  	return 0;
>  }
>  
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index ad9db60..b3f45a5 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -60,7 +60,8 @@ struct arch_timer_cpu {
>  
>  #ifdef CONFIG_KVM_ARM_TIMER
>  int kvm_timer_hyp_init(void);
> -int kvm_timer_init(struct kvm *kvm);
> +void kvm_timer_enable(struct kvm *kvm);
> +void kvm_timer_init(struct kvm *kvm);
>  void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  			  const struct kvm_irq_level *irq);
>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
> @@ -77,11 +78,8 @@ static inline int kvm_timer_hyp_init(void)
>  	return 0;
>  };
>  
> -static inline int kvm_timer_init(struct kvm *kvm)
> -{
> -	return 0;
> -}
> -
> +static inline void kvm_timer_enable(struct kvm *kvm) {}
> +static inline void kvm_timer_init(struct kvm *kvm) {}
>  static inline void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  					const struct kvm_irq_level *irq) {}
>  static inline void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu) {}
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index 22fa819..1c0772b 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -61,12 +61,14 @@ static void timer_disarm(struct arch_timer_cpu *timer)
>  
>  static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu)
>  {
> +	int ret;
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>  
>  	timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK;
> -	kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> -			    timer->irq->irq,
> -			    timer->irq->level);
> +	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> +				  timer->irq->irq,
> +				  timer->irq->level);
> +	WARN_ON(ret);
>  }
>  
>  static irqreturn_t kvm_arch_timer_handler(int irq, void *dev_id)
> @@ -307,12 +309,24 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
>  	timer_disarm(timer);
>  }
>  
> -int kvm_timer_init(struct kvm *kvm)
> +void kvm_timer_enable(struct kvm *kvm)
>  {
> -	if (timecounter && wqueue) {
> -		kvm->arch.timer.cntvoff = kvm_phys_timer_read();
> +	if (kvm->arch.timer.enabled)
> +		return;
> +
> +	/*
> +	 * There is a potential race here between VCPUs starting for the first
> +	 * time, which may be enabling the timer multiple times.  That doesn't
> +	 * hurt though, because we're just setting a variable to the same
> +	 * value that it already had.  The important thing is that all
> +	 * VCPUs have the enabled variable set, before entering the guest, if
> +	 * the arch timers are enabled.
> +	 */
> +	if (timecounter && wqueue)
>  		kvm->arch.timer.enabled = 1;
> -	}
> +}

This is particularly interesting, as this paves the way for per-vcpu
enable bits, meaning that we won't have to extract the enable bit from
struct kvm while doing a world switch. Clearly a fight for another day
though.

>  
> -	return 0;
> +void kvm_timer_init(struct kvm *kvm)
> +{
> +	kvm->arch.timer.cntvoff = kvm_phys_timer_read();
>  }
> 

Looks good to me.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...
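
[Editorial note: Marc's per-vcpu enable idea can be sketched as
follows. This is purely hypothetical -- the field and function names
are illustrative, not what any eventual kernel patch used -- but it
shows why the world-switch path would no longer need to reach into
struct kvm:]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-vcpu enable bit: snapshot the VM-wide state into
 * each vcpu's timer context on load, so the world switch consults
 * only vcpu-local state. */

struct arch_timer_cpu_model {
    bool enabled;               /* per-vcpu copy of the enable state */
    unsigned long cntv_ctl;
};

static void timer_vcpu_load(struct arch_timer_cpu_model *t, bool vm_enabled)
{
    t->enabled = vm_enabled;    /* snapshot once, outside the hot path */
}

static bool timer_should_save_restore(const struct arch_timer_cpu_model *t)
{
    /* world switch reads only the vcpu-local bit */
    return t->enabled;
}
```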


* [PATCH v3 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers
  2014-12-15 10:39   ` Marc Zyngier
@ 2014-12-15 10:50     ` Christoffer Dall
  0 siblings, 0 replies; 14+ messages in thread
From: Christoffer Dall @ 2014-12-15 10:50 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Dec 15, 2014 at 10:39:23AM +0000, Marc Zyngier wrote:
> On 15/12/14 10:20, Christoffer Dall wrote:
> > It is currently possible to run a VM with architected timers support
> > without creating an in-kernel VGIC, which will result in interrupts from
> > the virtual timer going nowhere.
> > 
> > To address this issue, move the architected timers initialization to the
> > time when we run a VCPU for the first time, and then only initialize
> > (and enable) the architected timers if we have a properly created and
> > initialized in-kernel VGIC.
> > 
> > When injecting interrupts from the virtual timer to the vgic, the
> > current setup should ensure that this never calls an on-demand init of
> > the VGIC, which is the only call path that could return an error from
> > kvm_vgic_inject_irq(), so capture the return value and raise a warning
> > if there's an error there.
> > 
> > We also change the kvm_timer_init() function from returning an int to being
> > a void function, since the function always succeeds.
> > 
> > Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> > ---
> > Changes [v2 -> v3]:
> >  - Split kvm_timer_init into kvm_timer_init and kvm_timer_enable
> >    and initialize the cntvoff in kvm_timer_init and only actually enable
> >    the timer if there is an in-kernel vgic.
> >  - Added comment about race from multiple VCPUs.
> >  - Support compiling on 32-bit ARM without vgic/arch-timers config
> >    option.
> > 
> >  arch/arm/kvm/arm.c           | 13 +++++++++++--
> >  include/kvm/arm_arch_timer.h | 10 ++++------
> >  virt/kvm/arm/arch_timer.c    | 30 ++++++++++++++++++++++--------
> >  3 files changed, 37 insertions(+), 16 deletions(-)
> > 
> > diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> > index d4da244..8cadfec 100644
> > --- a/arch/arm/kvm/arm.c
> > +++ b/arch/arm/kvm/arm.c
> > @@ -424,6 +424,7 @@ static void update_vttbr(struct kvm *kvm)
> >  
> >  static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> >  {
> > +	struct kvm *kvm = vcpu->kvm;
> >  	int ret;
> >  
> >  	if (likely(vcpu->arch.has_run_once))
> > @@ -435,12 +436,20 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> >  	 * Map the VGIC hardware resources before running a vcpu the first
> >  	 * time on this VM.
> >  	 */
> > -	if (unlikely(!vgic_ready(vcpu->kvm))) {
> > -		ret = kvm_vgic_map_resources(vcpu->kvm);
> > +	if (unlikely(!vgic_ready(kvm))) {
> > +		ret = kvm_vgic_map_resources(kvm);
> >  		if (ret)
> >  			return ret;
> >  	}
> >  
> > +	/*
> > +	 * Enable the arch timers only if we have an in-kernel VGIC
> > +	 * and it has been properly initialized, since we cannot handle
> > +	 * interrupts from the virtual timer with a userspace gic.
> > +	 */
> > +	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
> > +		kvm_timer_enable(kvm);
> > +
> >  	return 0;
> >  }
> >  
> > diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> > index ad9db60..b3f45a5 100644
> > --- a/include/kvm/arm_arch_timer.h
> > +++ b/include/kvm/arm_arch_timer.h
> > @@ -60,7 +60,8 @@ struct arch_timer_cpu {
> >  
> >  #ifdef CONFIG_KVM_ARM_TIMER
> >  int kvm_timer_hyp_init(void);
> > -int kvm_timer_init(struct kvm *kvm);
> > +void kvm_timer_enable(struct kvm *kvm);
> > +void kvm_timer_init(struct kvm *kvm);
> >  void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >  			  const struct kvm_irq_level *irq);
> >  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
> > @@ -77,11 +78,8 @@ static inline int kvm_timer_hyp_init(void)
> >  	return 0;
> >  };
> >  
> > -static inline int kvm_timer_init(struct kvm *kvm)
> > -{
> > -	return 0;
> > -}
> > -
> > +static inline void kvm_timer_enable(struct kvm *kvm) {}
> > +static inline void kvm_timer_init(struct kvm *kvm) {}
> >  static inline void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >  					const struct kvm_irq_level *irq) {}
> >  static inline void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu) {}
> > diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> > index 22fa819..1c0772b 100644
> > --- a/virt/kvm/arm/arch_timer.c
> > +++ b/virt/kvm/arm/arch_timer.c
> > @@ -61,12 +61,14 @@ static void timer_disarm(struct arch_timer_cpu *timer)
> >  
> >  static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu)
> >  {
> > +	int ret;
> >  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> >  
> >  	timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK;
> > -	kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> > -			    timer->irq->irq,
> > -			    timer->irq->level);
> > +	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> > +				  timer->irq->irq,
> > +				  timer->irq->level);
> > +	WARN_ON(ret);
> >  }
> >  
> >  static irqreturn_t kvm_arch_timer_handler(int irq, void *dev_id)
> > @@ -307,12 +309,24 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
> >  	timer_disarm(timer);
> >  }
> >  
> > -int kvm_timer_init(struct kvm *kvm)
> > +void kvm_timer_enable(struct kvm *kvm)
> >  {
> > -	if (timecounter && wqueue) {
> > -		kvm->arch.timer.cntvoff = kvm_phys_timer_read();
> > +	if (kvm->arch.timer.enabled)
> > +		return;
> > +
> > +	/*
> > +	 * There is a potential race here between VCPUs starting for the first
> > +	 * time, which may be enabling the timer multiple times.  That doesn't
> > +	 * hurt though, because we're just setting a variable to the same
> > +	 * value that it already had.  The important thing is that all
> > +	 * VCPUs have the enabled variable set, before entering the guest, if
> > +	 * the arch timers are enabled.
> > +	 */
> > +	if (timecounter && wqueue)
> >  		kvm->arch.timer.enabled = 1;
> > -	}
> > +}
> 
> This is particularly interesting, as this paves the way for per-vcpu
> enable bits, meaning that we won't have to extract the enable bit from
> struct kvm while doing a world switch. Clearly a fight for another day
> though.
> 

I didn't think of that, but sounds good.

> >  
> > -	return 0;
> > +void kvm_timer_init(struct kvm *kvm)
> > +{
> > +	kvm->arch.timer.cntvoff = kvm_phys_timer_read();
> >  }
> > 
> 
> Looks good to me.
> 
> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> 
Thanks, I'll go ahead and get these series merged then.

-Christoffer


end of thread, other threads:[~2014-12-15 10:50 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-12-13 11:17 [PATCH v2 0/6] Fix vgic initialization problems Christoffer Dall
2014-12-13 11:17 ` [PATCH v2 1/6] arm/arm64: KVM: vgic: move reset initialization into vgic_init_maps() Christoffer Dall
2014-12-13 11:17 ` [PATCH v2 2/6] arm/arm64: KVM: Rename vgic_initialized to vgic_ready Christoffer Dall
2014-12-13 11:17 ` [PATCH v2 3/6] arm/arm64: KVM: Add (new) vgic_initialized macro Christoffer Dall
2014-12-13 11:17 ` [PATCH v2 4/6] arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized Christoffer Dall
2014-12-13 11:17 ` [PATCH v2 5/6] arm/arm64: KVM: Initialize the vgic on-demand when injecting IRQs Christoffer Dall
2014-12-14 11:35   ` Marc Zyngier
2014-12-13 11:17 ` [PATCH v2 6/6] arm/arm64: KVM: Require in-kernel vgic for the arch timers Christoffer Dall
2014-12-14 11:33   ` Marc Zyngier
2014-12-14 14:14     ` Christoffer Dall
2014-12-15  8:57       ` Marc Zyngier
2014-12-15 10:20 ` [PATCH v3 " Christoffer Dall
2014-12-15 10:39   ` Marc Zyngier
2014-12-15 10:50     ` Christoffer Dall
