* [RFC v2 00/10] Provide the EL1 physical timer to the VM
@ 2017-01-27  1:04 ` Jintack Lim
  0 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

The ARM architecture defines the EL1 physical timer and the virtual timer,
and it is reasonable for an OS to expect to be able to access both.
However, the current KVM implementation does not expose the EL1 physical
timer to VMs; instead, it terminates the VM on any access to that timer.

This patch series enables VMs to use the EL1 physical timer through
trap-and-emulate.  The KVM host emulates each EL1 physical timer register
access and sets up a background timer accordingly.  When the background
timer expires, the KVM host injects an EL1 physical timer interrupt into
the VM.  Alternatively, it would be possible to let VMs access the EL1
physical timer without trapping, but that would require the Linux host to
somehow use the EL2 physical timer instead of the EL1 physical timer while
running the VM.  For now I have implemented trap-and-emulate because it was
straightforward to do, and I leave it to future work to determine whether
transferring the EL1 physical timer state to the EL2 timer provides any
performance benefit.
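
For illustration, a trapped guest write to CNTP_CVAL_EL0 would be handled
roughly as below.  This is only a sketch of the approach described above,
not the literal patch code; the handler name is made up, and the real
implementation is in the later patches of this series.

/* Illustrative sketch only -- not the actual patch code */
static void emulate_cntp_cval_write(struct kvm_vcpu *vcpu, u64 cval)
{
	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

	/* Record the trapped value in the emulated timer context */
	ptimer->cnt_cval = cval;

	/*
	 * Re-evaluate the emulated timer: if the deadline has already
	 * passed and the timer can fire, the physical timer interrupt
	 * level is raised for the VM; otherwise a background timer is
	 * programmed so the host can inject it when the deadline hits.
	 */
	kvm_timer_update_state(vcpu);
}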

This feature will be useful for any OS that wishes to access the EL1
physical timer. Nested virtualization is one such use case: a nested
hypervisor running inside a VM believes it has full access to the hardware
and will naturally try to use the EL1 physical timer, as Linux does. Other
nested hypervisors may instead try to use the EL2 physical timer, as Xen
does, but exposing the EL2 physical timer to the VM is out of scope for
this patch series. This series will, however, make it easy to add EL2
timer support in the future.

Note that Linux VMs booting in EL1 will be unaffected by this patch series:
they will continue to use only the virtual timer, so this series does not
introduce any performance degradation as a result of trap-and-emulate.

v1 => v2:
 - Rebase on kvm-arm-for-4.10-rc4
 - For simplicity, schedule the background timer for the EL1 physical timer
   emulation on every entry to the VM and cancel it on exit (see the sketch
   after this list).
 - Move cntvoff into the timer_context structure and restore the enable field
   to the arch_timer_cpu structure.
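
A rough sketch of the second item above (illustrative only; the helper
names follow this series, but these functions are simplified and are not
the literal patch code):

/* On every entry to the VM, arm a background timer for the emulated ptimer */
static void ptimer_schedule_on_entry(struct kvm_vcpu *vcpu)
{
	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

	/* Nothing to arm if the timer is disabled or its interrupt is masked */
	if (!kvm_timer_irq_can_fire(ptimer))
		return;

	/* Fire a host timer when the guest's CNTP_CVAL deadline passes */
	timer_arm(timer, kvm_timer_compute_delta(vcpu, ptimer));
}

/* On every exit from the VM, cancel the background timer */
static void ptimer_cancel_on_exit(struct kvm_vcpu *vcpu)
{
	timer_disarm(&vcpu->arch.timer_cpu);
}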

Jintack Lim (10):
  KVM: arm/arm64: Abstract virtual timer context into separate structure
  KVM: arm/arm64: Move cntvoff to each timer context
  KVM: arm/arm64: Decouple kvm timer functions from virtual timer
  KVM: arm/arm64: Add the EL1 physical timer context
  KVM: arm/arm64: Initialize the emulated EL1 physical timer
  KVM: arm/arm64: Update the physical timer interrupt level
  KVM: arm/arm64: Set a background timer to the earliest timer
    expiration
  KVM: arm/arm64: Set up a background timer for the physical timer
    emulation
  KVM: arm64: Add the EL1 physical timer access handler
  KVM: arm/arm64: Emulate the EL1 phys timer register access

 arch/arm/include/asm/kvm_host.h   |   6 +-
 arch/arm/kvm/arm.c                |   3 +-
 arch/arm/kvm/reset.c              |   9 +-
 arch/arm64/include/asm/kvm_host.h |   4 +-
 arch/arm64/kvm/reset.c            |   9 +-
 arch/arm64/kvm/sys_regs.c         |  60 +++++++++++
 include/kvm/arm_arch_timer.h      |  39 ++++----
 virt/kvm/arm/arch_timer.c         | 204 ++++++++++++++++++++++++++++----------
 virt/kvm/arm/hyp/timer-sr.c       |  13 +--
 9 files changed, 261 insertions(+), 86 deletions(-)

-- 
1.9.1

* [RFC v2 01/10] KVM: arm/arm64: Abstract virtual timer context into separate structure
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:04   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

Abstract the virtual timer context into a separate structure and update all
callers that refer to timer registers, irq state and so on. There is no
change in functionality.

This is about to become very handy when adding the EL1 physical timer.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 include/kvm/arm_arch_timer.h | 27 ++++++++---------
 virt/kvm/arm/arch_timer.c    | 69 +++++++++++++++++++++++---------------------
 virt/kvm/arm/hyp/timer-sr.c  | 10 ++++---
 3 files changed, 56 insertions(+), 50 deletions(-)

diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 5c970ce..daad3c1 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -28,15 +28,20 @@ struct arch_timer_kvm {
 	u64			cntvoff;
 };
 
-struct arch_timer_cpu {
+struct arch_timer_context {
 	/* Registers: control register, timer value */
-	u32				cntv_ctl;	/* Saved/restored */
-	u64				cntv_cval;	/* Saved/restored */
+	u32				cnt_ctl;
+	u64				cnt_cval;
+
+	/* Timer IRQ */
+	struct kvm_irq_level		irq;
+
+	/* Active IRQ state caching */
+	bool				active_cleared_last;
+};
 
-	/*
-	 * Anything that is not used directly from assembly code goes
-	 * here.
-	 */
+struct arch_timer_cpu {
+	struct arch_timer_context	vtimer;
 
 	/* Background timer used when the guest is not running */
 	struct hrtimer			timer;
@@ -47,12 +52,6 @@ struct arch_timer_cpu {
 	/* Background timer active */
 	bool				armed;
 
-	/* Timer IRQ */
-	struct kvm_irq_level		irq;
-
-	/* Active IRQ state caching */
-	bool				active_cleared_last;
-
 	/* Is the timer enabled */
 	bool			enabled;
 };
@@ -77,4 +76,6 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu);
 
 void kvm_timer_init_vhe(void);
+
+#define vcpu_vtimer(v)	(&(v)->arch.timer_cpu.vtimer)
 #endif
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 6a084cd..6740efa 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -37,7 +37,7 @@
 
 void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.timer_cpu.active_cleared_last = false;
+	vcpu_vtimer(vcpu)->active_cleared_last = false;
 }
 
 static u64 kvm_phys_timer_read(void)
@@ -102,7 +102,7 @@ static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
 {
 	u64 cval, now;
 
-	cval = vcpu->arch.timer_cpu.cntv_cval;
+	cval = vcpu_vtimer(vcpu)->cnt_cval;
 	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
 
 	if (now < cval) {
@@ -144,21 +144,21 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
 
 static bool kvm_timer_irq_can_fire(struct kvm_vcpu *vcpu)
 {
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
-	return !(timer->cntv_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
-		(timer->cntv_ctl & ARCH_TIMER_CTRL_ENABLE);
+	return !(vtimer->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
+		(vtimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
 }
 
 bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
 {
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 	u64 cval, now;
 
 	if (!kvm_timer_irq_can_fire(vcpu))
 		return false;
 
-	cval = timer->cntv_cval;
+	cval = vtimer->cnt_cval;
 	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
 
 	return cval <= now;
@@ -167,17 +167,17 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
 static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
 {
 	int ret;
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
 	BUG_ON(!vgic_initialized(vcpu->kvm));
 
-	timer->active_cleared_last = false;
-	timer->irq.level = new_level;
-	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
-				   timer->irq.level);
+	vtimer->active_cleared_last = false;
+	vtimer->irq.level = new_level;
+	trace_kvm_timer_update_irq(vcpu->vcpu_id, vtimer->irq.irq,
+				   vtimer->irq.level);
 	ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
-					 timer->irq.irq,
-					 timer->irq.level);
+					 vtimer->irq.irq,
+					 vtimer->irq.level);
 	WARN_ON(ret);
 }
 
@@ -188,18 +188,19 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
 static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
 	/*
 	 * If userspace modified the timer registers via SET_ONE_REG before
-	 * the vgic was initialized, we mustn't set the timer->irq.level value
+	 * the vgic was initialized, we mustn't set the vtimer->irq.level value
 	 * because the guest would never see the interrupt.  Instead wait
 	 * until we call this function from kvm_timer_flush_hwstate.
 	 */
 	if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
 		return -ENODEV;
 
-	if (kvm_timer_should_fire(vcpu) != timer->irq.level)
-		kvm_timer_update_irq(vcpu, !timer->irq.level);
+	if (kvm_timer_should_fire(vcpu) != vtimer->irq.level)
+		kvm_timer_update_irq(vcpu, !vtimer->irq.level);
 
 	return 0;
 }
@@ -249,7 +250,7 @@ void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
  */
 void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
 {
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 	bool phys_active;
 	int ret;
 
@@ -273,8 +274,8 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
 	* to ensure that hardware interrupts from the timer triggers a guest
 	* exit.
 	*/
-	phys_active = timer->irq.level ||
-			kvm_vgic_map_is_active(vcpu, timer->irq.irq);
+	phys_active = vtimer->irq.level ||
+			kvm_vgic_map_is_active(vcpu, vtimer->irq.irq);
 
 	/*
 	 * We want to avoid hitting the (re)distributor as much as
@@ -296,7 +297,7 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
 	 * - cached value is "active clear"
 	 * - value to be programmed is "active clear"
 	 */
-	if (timer->active_cleared_last && !phys_active)
+	if (vtimer->active_cleared_last && !phys_active)
 		return;
 
 	ret = irq_set_irqchip_state(host_vtimer_irq,
@@ -304,7 +305,7 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
 				    phys_active);
 	WARN_ON(ret);
 
-	timer->active_cleared_last = !phys_active;
+	vtimer->active_cleared_last = !phys_active;
 }
 
 /**
@@ -330,7 +331,7 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
 int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 			 const struct kvm_irq_level *irq)
 {
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
 	/*
 	 * The vcpu timer irq number cannot be determined in
@@ -338,7 +339,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 	 * kvm_vcpu_set_target(). To handle this, we determine
 	 * vcpu timer irq number when the vcpu is reset.
 	 */
-	timer->irq.irq = irq->irq;
+	vtimer->irq.irq = irq->irq;
 
 	/*
 	 * The bits in CNTV_CTL are architecturally reset to UNKNOWN for ARMv8
@@ -346,7 +347,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 	 * resets the timer to be disabled and unmasked and is compliant with
 	 * the ARMv7 architecture.
 	 */
-	timer->cntv_ctl = 0;
+	vtimer->cnt_ctl = 0;
 	kvm_timer_update_state(vcpu);
 
 	return 0;
@@ -368,17 +369,17 @@ static void kvm_timer_init_interrupt(void *info)
 
 int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
 {
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
 	switch (regid) {
 	case KVM_REG_ARM_TIMER_CTL:
-		timer->cntv_ctl = value;
+		vtimer->cnt_ctl = value;
 		break;
 	case KVM_REG_ARM_TIMER_CNT:
 		vcpu->kvm->arch.timer.cntvoff = kvm_phys_timer_read() - value;
 		break;
 	case KVM_REG_ARM_TIMER_CVAL:
-		timer->cntv_cval = value;
+		vtimer->cnt_cval = value;
 		break;
 	default:
 		return -1;
@@ -390,15 +391,15 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
 
 u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid)
 {
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
 	switch (regid) {
 	case KVM_REG_ARM_TIMER_CTL:
-		return timer->cntv_ctl;
+		return vtimer->cnt_ctl;
 	case KVM_REG_ARM_TIMER_CNT:
 		return kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
 	case KVM_REG_ARM_TIMER_CVAL:
-		return timer->cntv_cval;
+		return vtimer->cnt_cval;
 	}
 	return (u64)-1;
 }
@@ -462,14 +463,16 @@ int kvm_timer_hyp_init(void)
 void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
 	timer_disarm(timer);
-	kvm_vgic_unmap_phys_irq(vcpu, timer->irq.irq);
+	kvm_vgic_unmap_phys_irq(vcpu, vtimer->irq.irq);
 }
 
 int kvm_timer_enable(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 	struct irq_desc *desc;
 	struct irq_data *data;
 	int phys_irq;
@@ -497,7 +500,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
 	 * Tell the VGIC that the virtual interrupt is tied to a
 	 * physical interrupt. We do that once per VCPU.
 	 */
-	ret = kvm_vgic_map_phys_irq(vcpu, timer->irq.irq, phys_irq);
+	ret = kvm_vgic_map_phys_irq(vcpu, vtimer->irq.irq, phys_irq);
 	if (ret)
 		return ret;
 
diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
index 63e28dd..0cf0895 100644
--- a/virt/kvm/arm/hyp/timer-sr.c
+++ b/virt/kvm/arm/hyp/timer-sr.c
@@ -25,11 +25,12 @@
 void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 	u64 val;
 
 	if (timer->enabled) {
-		timer->cntv_ctl = read_sysreg_el0(cntv_ctl);
-		timer->cntv_cval = read_sysreg_el0(cntv_cval);
+		vtimer->cnt_ctl = read_sysreg_el0(cntv_ctl);
+		vtimer->cnt_cval = read_sysreg_el0(cntv_cval);
 	}
 
 	/* Disable the virtual timer */
@@ -54,6 +55,7 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 	u64 val;
 
 	/* Those bits are already configured at boot on VHE-system */
@@ -70,8 +72,8 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
 
 	if (timer->enabled) {
 		write_sysreg(kvm->arch.timer.cntvoff, cntvoff_el2);
-		write_sysreg_el0(timer->cntv_cval, cntv_cval);
+		write_sysreg_el0(vtimer->cnt_cval, cntv_cval);
 		isb();
-		write_sysreg_el0(timer->cntv_ctl, cntv_ctl);
+		write_sysreg_el0(vtimer->cnt_ctl, cntv_ctl);
 	}
 }
-- 
1.9.1

* [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:04   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

Make cntvoff part of each timer context. This helps abstract the kvm timer
functions so that they work with a timer context regardless of the timer
type (e.g. physical timer or virtual timer).

This also paves the way for adjusting cntvoff on a per-CPU basis, should
that ever make sense.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm/include/asm/kvm_host.h   |  6 +++---
 arch/arm64/include/asm/kvm_host.h |  4 ++--
 include/kvm/arm_arch_timer.h      |  8 +++-----
 virt/kvm/arm/arch_timer.c         | 26 ++++++++++++++++++++------
 virt/kvm/arm/hyp/timer-sr.c       |  3 +--
 5 files changed, 29 insertions(+), 18 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index d5423ab..f5456a9 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -60,9 +60,6 @@ struct kvm_arch {
 	/* The last vcpu id that ran on each physical CPU */
 	int __percpu *last_vcpu_ran;
 
-	/* Timer */
-	struct arch_timer_kvm	timer;
-
 	/*
 	 * Anything that is not used directly from assembly code goes
 	 * here.
@@ -75,6 +72,9 @@ struct kvm_arch {
 	/* Stage-2 page table */
 	pgd_t *pgd;
 
+	/* A lock to synchronize cntvoff among all vtimer context of vcpus */
+	spinlock_t cntvoff_lock;
+
 	/* Interrupt controller */
 	struct vgic_dist	vgic;
 	int max_vcpus;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e505038..23749a8 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -71,8 +71,8 @@ struct kvm_arch {
 	/* Interrupt controller */
 	struct vgic_dist	vgic;
 
-	/* Timer */
-	struct arch_timer_kvm	timer;
+	/* A lock to synchronize cntvoff among all vtimer context of vcpus */
+	spinlock_t cntvoff_lock;
 };
 
 #define KVM_NR_MEM_OBJS     40
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index daad3c1..1b9c988 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -23,11 +23,6 @@
 #include <linux/hrtimer.h>
 #include <linux/workqueue.h>
 
-struct arch_timer_kvm {
-	/* Virtual offset */
-	u64			cntvoff;
-};
-
 struct arch_timer_context {
 	/* Registers: control register, timer value */
 	u32				cnt_ctl;
@@ -38,6 +33,9 @@ struct arch_timer_context {
 
 	/* Active IRQ state caching */
 	bool				active_cleared_last;
+
+	/* Virtual offset */
+	u64			cntvoff;
 };
 
 struct arch_timer_cpu {
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 6740efa..fa4c042 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -101,9 +101,10 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
 static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
 {
 	u64 cval, now;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
-	cval = vcpu_vtimer(vcpu)->cnt_cval;
-	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
+	cval = vtimer->cnt_cval;
+	now = kvm_phys_timer_read() - vtimer->cntvoff;
 
 	if (now < cval) {
 		u64 ns;
@@ -159,7 +160,7 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
 		return false;
 
 	cval = vtimer->cnt_cval;
-	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
+	now = kvm_phys_timer_read() - vtimer->cntvoff;
 
 	return cval <= now;
 }
@@ -353,10 +354,23 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+/* Make the updates of cntvoff for all vtimer contexts atomic */
+static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)
+{
+	int i;
+
+	spin_lock(&vcpu->kvm->arch.cntvoff_lock);
+	kvm_for_each_vcpu(i, vcpu, vcpu->kvm)
+		vcpu_vtimer(vcpu)->cntvoff = cntvoff;
+	spin_unlock(&vcpu->kvm->arch.cntvoff_lock);
+}
+
 void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 
+	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
+
 	INIT_WORK(&timer->expired, kvm_timer_inject_irq_work);
 	hrtimer_init(&timer->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
 	timer->timer.function = kvm_timer_expire;
@@ -376,7 +390,7 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
 		vtimer->cnt_ctl = value;
 		break;
 	case KVM_REG_ARM_TIMER_CNT:
-		vcpu->kvm->arch.timer.cntvoff = kvm_phys_timer_read() - value;
+		update_vtimer_cntvoff(vcpu, kvm_phys_timer_read() - value);
 		break;
 	case KVM_REG_ARM_TIMER_CVAL:
 		vtimer->cnt_cval = value;
@@ -397,7 +411,7 @@ u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid)
 	case KVM_REG_ARM_TIMER_CTL:
 		return vtimer->cnt_ctl;
 	case KVM_REG_ARM_TIMER_CNT:
-		return kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
+		return kvm_phys_timer_read() - vtimer->cntvoff;
 	case KVM_REG_ARM_TIMER_CVAL:
 		return vtimer->cnt_cval;
 	}
@@ -511,7 +525,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
 
 void kvm_timer_init(struct kvm *kvm)
 {
-	kvm->arch.timer.cntvoff = kvm_phys_timer_read();
+	spin_lock_init(&kvm->arch.cntvoff_lock);
 }
 
 /*
diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
index 0cf0895..4734915 100644
--- a/virt/kvm/arm/hyp/timer-sr.c
+++ b/virt/kvm/arm/hyp/timer-sr.c
@@ -53,7 +53,6 @@ void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
 
 void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
 {
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 	u64 val;
@@ -71,7 +70,7 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
 	}
 
 	if (timer->enabled) {
-		write_sysreg(kvm->arch.timer.cntvoff, cntvoff_el2);
+		write_sysreg(vtimer->cntvoff, cntvoff_el2);
 		write_sysreg_el0(vtimer->cnt_cval, cntv_cval);
 		isb();
 		write_sysreg_el0(vtimer->cnt_ctl, cntv_ctl);
-- 
1.9.1

* [RFC v2 03/10] KVM: arm/arm64: Decouple kvm timer functions from virtual timer
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:04   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

Now that we have a separate structure for the timer context, make the
timer functions generic so that they can work with any timer context, not
just the virtual timer context.  This does not change the virtual timer
functionality.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm/kvm/arm.c           |  2 +-
 include/kvm/arm_arch_timer.h |  3 ++-
 virt/kvm/arm/arch_timer.c    | 55 ++++++++++++++++++++++----------------------
 3 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9d74464..9a34a3c 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -301,7 +301,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 {
-	return kvm_timer_should_fire(vcpu);
+	return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
 }
 
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 1b9c988..d921d20 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -67,7 +67,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
 int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
 
-bool kvm_timer_should_fire(struct kvm_vcpu *vcpu);
+bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
+			   struct arch_timer_context *timer_ctx);
 void kvm_timer_schedule(struct kvm_vcpu *vcpu);
 void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
 
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index fa4c042..f72005a 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -98,13 +98,13 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
 	kvm_vcpu_kick(vcpu);
 }
 
-static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
+static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
+				   struct arch_timer_context *timer_ctx)
 {
 	u64 cval, now;
-	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
-	cval = vtimer->cnt_cval;
-	now = kvm_phys_timer_read() - vtimer->cntvoff;
+	cval = timer_ctx->cnt_cval;
+	now = kvm_phys_timer_read() - timer_ctx->cntvoff;
 
 	if (now < cval) {
 		u64 ns;
@@ -133,7 +133,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
 	 * PoV (NTP on the host may have forced it to expire
 	 * early). If we should have slept longer, restart it.
 	 */
-	ns = kvm_timer_compute_delta(vcpu);
+	ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
 	if (unlikely(ns)) {
 		hrtimer_forward_now(hrt, ns_to_ktime(ns));
 		return HRTIMER_RESTART;
@@ -143,42 +143,40 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
 	return HRTIMER_NORESTART;
 }
 
-static bool kvm_timer_irq_can_fire(struct kvm_vcpu *vcpu)
+static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
 {
-	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
-
-	return !(vtimer->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
-		(vtimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
+	return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
+		(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
 }
 
-bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
+bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
+			   struct arch_timer_context *timer_ctx)
 {
-	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 	u64 cval, now;
 
-	if (!kvm_timer_irq_can_fire(vcpu))
+	if (!kvm_timer_irq_can_fire(timer_ctx))
 		return false;
 
-	cval = vtimer->cnt_cval;
-	now = kvm_phys_timer_read() - vtimer->cntvoff;
+	cval = timer_ctx->cnt_cval;
+	now = kvm_phys_timer_read() - timer_ctx->cntvoff;
 
 	return cval <= now;
 }
 
-static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
+static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
+					struct arch_timer_context *timer_ctx)
 {
 	int ret;
-	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
 	BUG_ON(!vgic_initialized(vcpu->kvm));
 
-	vtimer->active_cleared_last = false;
-	vtimer->irq.level = new_level;
-	trace_kvm_timer_update_irq(vcpu->vcpu_id, vtimer->irq.irq,
-				   vtimer->irq.level);
+	timer_ctx->active_cleared_last = false;
+	timer_ctx->irq.level = new_level;
+	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
+				   timer_ctx->irq.level);
 	ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
-					 vtimer->irq.irq,
-					 vtimer->irq.level);
+					 timer_ctx->irq.irq,
+					 timer_ctx->irq.level);
 	WARN_ON(ret);
 }
 
@@ -200,8 +198,8 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
 	if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
 		return -ENODEV;
 
-	if (kvm_timer_should_fire(vcpu) != vtimer->irq.level)
-		kvm_timer_update_irq(vcpu, !vtimer->irq.level);
+	if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
+		kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);
 
 	return 0;
 }
@@ -214,6 +212,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
 void kvm_timer_schedule(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
 	BUG_ON(timer_is_armed(timer));
 
@@ -222,18 +221,18 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
 	 * already expired, because kvm_vcpu_block will return before putting
 	 * the thread to sleep.
 	 */
-	if (kvm_timer_should_fire(vcpu))
+	if (kvm_timer_should_fire(vcpu, vtimer))
 		return;
 
 	/*
 	 * If the timer is not capable of raising interrupts (disabled or
 	 * masked), then there's no more work for us to do.
 	 */
-	if (!kvm_timer_irq_can_fire(vcpu))
+	if (!kvm_timer_irq_can_fire(vtimer))
 		return;
 
 	/*  The timer has not yet expired, schedule a background timer */
-	timer_arm(timer, kvm_timer_compute_delta(vcpu));
+	timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
 }
 
 void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
-- 
1.9.1

* [RFC v2 04/10] KVM: arm/arm64: Add the EL1 physical timer context
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:04   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

Add the EL1 physical timer context.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 include/kvm/arm_arch_timer.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index d921d20..69f648b 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -40,6 +40,7 @@ struct arch_timer_context {
 
 struct arch_timer_cpu {
 	struct arch_timer_context	vtimer;
+	struct arch_timer_context	ptimer;
 
 	/* Background timer used when the guest is not running */
 	struct hrtimer			timer;
@@ -77,4 +78,5 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
 void kvm_timer_init_vhe(void);
 
 #define vcpu_vtimer(v)	(&(v)->arch.timer_cpu.vtimer)
+#define vcpu_ptimer(v)	(&(v)->arch.timer_cpu.ptimer)
 #endif
-- 
1.9.1

* [RFC v2 05/10] KVM: arm/arm64: Initialize the emulated EL1 physical timer
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:04   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

Initialize the emulated EL1 physical timer with the default irq number.
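
These defaults follow the usual GIC PPI assignments for the ARM generic
timer: 27 for the virtual timer and 30 for the non-secure EL1 physical
timer. A tiny sketch of the values as used in this patch (the enum and
its names are illustrative only, not part of the code):

/* Illustrative only: the default PPI INTIDs picked below. */
enum {
	DEFAULT_VTIMER_PPI = 27,	/* default_vtimer_irq / cortexa_vtimer_irq */
	DEFAULT_PTIMER_PPI = 30,	/* default_ptimer_irq / cortexa_ptimer_irq */
};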

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm/kvm/reset.c         | 9 ++++++++-
 arch/arm64/kvm/reset.c       | 9 ++++++++-
 include/kvm/arm_arch_timer.h | 3 ++-
 virt/kvm/arm/arch_timer.c    | 9 +++++++--
 4 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kvm/reset.c b/arch/arm/kvm/reset.c
index 4b5e802..1da8b2d 100644
--- a/arch/arm/kvm/reset.c
+++ b/arch/arm/kvm/reset.c
@@ -37,6 +37,11 @@
 	.usr_regs.ARM_cpsr = SVC_MODE | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT,
 };
 
+static const struct kvm_irq_level cortexa_ptimer_irq = {
+	{ .irq = 30 },
+	.level = 1,
+};
+
 static const struct kvm_irq_level cortexa_vtimer_irq = {
 	{ .irq = 27 },
 	.level = 1,
@@ -58,6 +63,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct kvm_regs *reset_regs;
 	const struct kvm_irq_level *cpu_vtimer_irq;
+	const struct kvm_irq_level *cpu_ptimer_irq;
 
 	switch (vcpu->arch.target) {
 	case KVM_ARM_TARGET_CORTEX_A7:
@@ -65,6 +71,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		reset_regs = &cortexa_regs_reset;
 		vcpu->arch.midr = read_cpuid_id();
 		cpu_vtimer_irq = &cortexa_vtimer_irq;
+		cpu_ptimer_irq = &cortexa_ptimer_irq;
 		break;
 	default:
 		return -ENODEV;
@@ -77,5 +84,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	kvm_reset_coprocs(vcpu);
 
 	/* Reset arch_timer context */
-	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
+	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
 }
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index e95d4f6..d9e9697 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -46,6 +46,11 @@
 			COMPAT_PSR_I_BIT | COMPAT_PSR_F_BIT),
 };
 
+static const struct kvm_irq_level default_ptimer_irq = {
+	.irq	= 30,
+	.level	= 1,
+};
+
 static const struct kvm_irq_level default_vtimer_irq = {
 	.irq	= 27,
 	.level	= 1,
@@ -104,6 +109,7 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
 	const struct kvm_irq_level *cpu_vtimer_irq;
+	const struct kvm_irq_level *cpu_ptimer_irq;
 	const struct kvm_regs *cpu_reset;
 
 	switch (vcpu->arch.target) {
@@ -117,6 +123,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		}
 
 		cpu_vtimer_irq = &default_vtimer_irq;
+		cpu_ptimer_irq = &default_ptimer_irq;
 		break;
 	}
 
@@ -130,5 +137,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	kvm_pmu_vcpu_reset(vcpu);
 
 	/* Reset timer */
-	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
+	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
 }
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 69f648b..a364593 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -59,7 +59,8 @@ struct arch_timer_cpu {
 int kvm_timer_enable(struct kvm_vcpu *vcpu);
 void kvm_timer_init(struct kvm *kvm);
 int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
-			 const struct kvm_irq_level *irq);
+			 const struct kvm_irq_level *virt_irq,
+			 const struct kvm_irq_level *phys_irq);
 void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
 void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index f72005a..0f6e935 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -329,9 +329,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
 }
 
 int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
-			 const struct kvm_irq_level *irq)
+			 const struct kvm_irq_level *virt_irq,
+			 const struct kvm_irq_level *phys_irq)
 {
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
 
 	/*
 	 * The vcpu timer irq number cannot be determined in
@@ -339,7 +341,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 	 * kvm_vcpu_set_target(). To handle this, we determine
 	 * vcpu timer irq number when the vcpu is reset.
 	 */
-	vtimer->irq.irq = irq->irq;
+	vtimer->irq.irq = virt_irq->irq;
+	ptimer->irq.irq = phys_irq->irq;
 
 	/*
 	 * The bits in CNTV_CTL are architecturally reset to UNKNOWN for ARMv8
@@ -348,6 +351,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 	 * the ARMv7 architecture.
 	 */
 	vtimer->cnt_ctl = 0;
+	ptimer->cnt_ctl = 0;
 	kvm_timer_update_state(vcpu);
 
 	return 0;
@@ -369,6 +373,7 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 
 	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
+	vcpu_ptimer(vcpu)->cntvoff = 0;
 
 	INIT_WORK(&timer->expired, kvm_timer_inject_irq_work);
 	hrtimer_init(&timer->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 05/10] KVM: arm/arm64: Initialize the emulated EL1 physical timer
@ 2017-01-27  1:04   ` Jintack Lim
  0 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: linux-arm-kernel

Initialize the emulated EL1 physical timer with the default irq number.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm/kvm/reset.c         | 9 ++++++++-
 arch/arm64/kvm/reset.c       | 9 ++++++++-
 include/kvm/arm_arch_timer.h | 3 ++-
 virt/kvm/arm/arch_timer.c    | 9 +++++++--
 4 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kvm/reset.c b/arch/arm/kvm/reset.c
index 4b5e802..1da8b2d 100644
--- a/arch/arm/kvm/reset.c
+++ b/arch/arm/kvm/reset.c
@@ -37,6 +37,11 @@
 	.usr_regs.ARM_cpsr = SVC_MODE | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT,
 };
 
+static const struct kvm_irq_level cortexa_ptimer_irq = {
+	{ .irq = 30 },
+	.level = 1,
+};
+
 static const struct kvm_irq_level cortexa_vtimer_irq = {
 	{ .irq = 27 },
 	.level = 1,
@@ -58,6 +63,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct kvm_regs *reset_regs;
 	const struct kvm_irq_level *cpu_vtimer_irq;
+	const struct kvm_irq_level *cpu_ptimer_irq;
 
 	switch (vcpu->arch.target) {
 	case KVM_ARM_TARGET_CORTEX_A7:
@@ -65,6 +71,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		reset_regs = &cortexa_regs_reset;
 		vcpu->arch.midr = read_cpuid_id();
 		cpu_vtimer_irq = &cortexa_vtimer_irq;
+		cpu_ptimer_irq = &cortexa_ptimer_irq;
 		break;
 	default:
 		return -ENODEV;
@@ -77,5 +84,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	kvm_reset_coprocs(vcpu);
 
 	/* Reset arch_timer context */
-	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
+	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
 }
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index e95d4f6..d9e9697 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -46,6 +46,11 @@
 			COMPAT_PSR_I_BIT | COMPAT_PSR_F_BIT),
 };
 
+static const struct kvm_irq_level default_ptimer_irq = {
+	.irq	= 30,
+	.level	= 1,
+};
+
 static const struct kvm_irq_level default_vtimer_irq = {
 	.irq	= 27,
 	.level	= 1,
@@ -104,6 +109,7 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
 	const struct kvm_irq_level *cpu_vtimer_irq;
+	const struct kvm_irq_level *cpu_ptimer_irq;
 	const struct kvm_regs *cpu_reset;
 
 	switch (vcpu->arch.target) {
@@ -117,6 +123,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		}
 
 		cpu_vtimer_irq = &default_vtimer_irq;
+		cpu_ptimer_irq = &default_ptimer_irq;
 		break;
 	}
 
@@ -130,5 +137,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	kvm_pmu_vcpu_reset(vcpu);
 
 	/* Reset timer */
-	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
+	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
 }
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 69f648b..a364593 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -59,7 +59,8 @@ struct arch_timer_cpu {
 int kvm_timer_enable(struct kvm_vcpu *vcpu);
 void kvm_timer_init(struct kvm *kvm);
 int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
-			 const struct kvm_irq_level *irq);
+			 const struct kvm_irq_level *virt_irq,
+			 const struct kvm_irq_level *phys_irq);
 void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
 void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index f72005a..0f6e935 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -329,9 +329,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
 }
 
 int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
-			 const struct kvm_irq_level *irq)
+			 const struct kvm_irq_level *virt_irq,
+			 const struct kvm_irq_level *phys_irq)
 {
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
 
 	/*
 	 * The vcpu timer irq number cannot be determined in
@@ -339,7 +341,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 	 * kvm_vcpu_set_target(). To handle this, we determine
 	 * vcpu timer irq number when the vcpu is reset.
 	 */
-	vtimer->irq.irq = irq->irq;
+	vtimer->irq.irq = virt_irq->irq;
+	ptimer->irq.irq = phys_irq->irq;
 
 	/*
 	 * The bits in CNTV_CTL are architecturally reset to UNKNOWN for ARMv8
@@ -348,6 +351,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 	 * the ARMv7 architecture.
 	 */
 	vtimer->cnt_ctl = 0;
+	ptimer->cnt_ctl = 0;
 	kvm_timer_update_state(vcpu);
 
 	return 0;
@@ -369,6 +373,7 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 
 	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
+	vcpu_ptimer(vcpu)->cntvoff = 0;
 
 	INIT_WORK(&timer->expired, kvm_timer_inject_irq_work);
 	hrtimer_init(&timer->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:04   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

Now that we maintain the EL1 physical timer register states of VMs,
update the physical timer interrupt level along with the virtual one.

Note that the emulated EL1 physical timer is not mapped to any hardware
timer, so its interrupt is injected through kvm_vgic_inject_irq() rather
than through the mapped-IRQ path used for the virtual timer.
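
The decision itself is just a level comparison against the physical
counter. A minimal standalone model of that logic (a sketch only:
simplified field and function names, and the vgic call replaced by a
printf):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CTRL_ENABLE	(1u << 0)
#define CTRL_IMASK	(1u << 1)

struct timer_ctx {
	uint32_t cnt_ctl;
	uint64_t cnt_cval;
	bool irq_level;
};

/* Enabled, not masked, and the compare value has been reached. */
static bool should_fire(const struct timer_ctx *t, uint64_t now)
{
	if (!(t->cnt_ctl & CTRL_ENABLE) || (t->cnt_ctl & CTRL_IMASK))
		return false;
	return t->cnt_cval <= now;
}

/* Tell the interrupt controller only when the line level changes. */
static void update_level(struct timer_ctx *t, uint64_t now)
{
	bool new_level = should_fire(t, now);

	if (new_level != t->irq_level) {
		t->irq_level = new_level;
		printf("inject level %d for this timer's irq\n", new_level);
	}
}

int main(void)
{
	struct timer_ctx ptimer = { .cnt_ctl = CTRL_ENABLE, .cnt_cval = 100 };

	update_level(&ptimer, 50);	/* not yet expired: level stays low */
	update_level(&ptimer, 150);	/* expired: level goes high */
	return 0;
}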

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 0f6e935..3b6bd50 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
 	WARN_ON(ret);
 }
 
+static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
+				 struct arch_timer_context *timer)
+{
+	int ret;
+
+	BUG_ON(!vgic_initialized(vcpu->kvm));
+
+	timer->irq.level = new_level;
+	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
+				   timer->irq.level);
+	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, timer->irq.irq,
+				  timer->irq.level);
+	WARN_ON(ret);
+}
+
 /*
  * Check if there was a change in the timer state (should we raise or lower
  * the line level to the GIC).
@@ -188,6 +203,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
 
 	/*
 	 * If userspace modified the timer registers via SET_ONE_REG before
@@ -201,6 +217,10 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
 	if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
 		kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);
 
+	/* The emulated EL1 physical timer irq is not mapped to hardware */
+	if (kvm_timer_should_fire(vcpu, ptimer) != ptimer->irq.level)
+		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+
 	return 0;
 }
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
@ 2017-01-27  1:04   ` Jintack Lim
  0 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we maintain the EL1 physical timer register states of VMs,
update the physical timer interrupt level along with the virtual one.

Note that the emulated EL1 physical timer is not mapped to any hardware
timer, so its interrupt is injected through kvm_vgic_inject_irq() rather
than through the mapped-IRQ path used for the virtual timer.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 0f6e935..3b6bd50 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
 	WARN_ON(ret);
 }
 
+static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
+				 struct arch_timer_context *timer)
+{
+	int ret;
+
+	BUG_ON(!vgic_initialized(vcpu->kvm));
+
+	timer->irq.level = new_level;
+	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
+				   timer->irq.level);
+	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, timer->irq.irq,
+				  timer->irq.level);
+	WARN_ON(ret);
+}
+
 /*
  * Check if there was a change in the timer state (should we raise or lower
  * the line level to the GIC).
@@ -188,6 +203,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
 
 	/*
 	 * If userspace modified the timer registers via SET_ONE_REG before
@@ -201,6 +217,10 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
 	if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
 		kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);
 
+	/* The emulated EL1 physical timer irq is not mapped to hardware */
+	if (kvm_timer_should_fire(vcpu, ptimer) != ptimer->irq.level)
+		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+
 	return 0;
 }
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 07/10] KVM: arm/arm64: Set a background timer to the earliest timer expiration
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:04   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

When scheduling a background timer, consider both the virtual and the
physical timer and pick the earliest expiration time.
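
A standalone sketch of the selection logic added as
kvm_timer_earliest_exp() below (simplified: the per-timer delta stands in
for kvm_timer_compute_delta(), and 0 means nothing needs to be
programmed):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CTRL_ENABLE	(1u << 0)
#define CTRL_IMASK	(1u << 1)

struct timer_ctx {
	uint32_t cnt_ctl;
	uint64_t delta_ns;	/* time until this timer fires */
};

static bool irq_can_fire(const struct timer_ctx *t)
{
	return (t->cnt_ctl & CTRL_ENABLE) && !(t->cnt_ctl & CTRL_IMASK);
}

/* Earliest expiration among the two guest timers; 0 if neither can fire. */
static uint64_t earliest_exp(const struct timer_ctx *vtimer,
			     const struct timer_ctx *ptimer)
{
	uint64_t min_virt = UINT64_MAX, min_phys = UINT64_MAX;

	if (irq_can_fire(vtimer))
		min_virt = vtimer->delta_ns;
	if (irq_can_fire(ptimer))
		min_phys = ptimer->delta_ns;

	if (min_virt == UINT64_MAX && min_phys == UINT64_MAX)
		return 0;

	return min_virt < min_phys ? min_virt : min_phys;
}

int main(void)
{
	struct timer_ctx vt = { CTRL_ENABLE, 5000 };
	struct timer_ctx pt = { CTRL_ENABLE | CTRL_IMASK, 1000 };	/* masked */

	printf("%llu\n", (unsigned long long)earliest_exp(&vt, &pt));	/* 5000 */
	return 0;
}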

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm/kvm/arm.c        |  3 ++-
 virt/kvm/arm/arch_timer.c | 55 ++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 44 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9a34a3c..9e94f87 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -301,7 +301,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 {
-	return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
+	return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu)) ||
+		kvm_timer_should_fire(vcpu, vcpu_ptimer(vcpu));
 }
 
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 3b6bd50..d3925e2 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -119,6 +119,35 @@ static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
+{
+	return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
+		(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
+}
+
+/*
+ * Returns the earliest expiration time in ns among guest timers.
+ * Note that it will return 0 if none of timers can fire.
+ */
+static u64 kvm_timer_earliest_exp(struct kvm_vcpu *vcpu)
+{
+	u64 min_virt = ULLONG_MAX, min_phys = ULLONG_MAX;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+	if (kvm_timer_irq_can_fire(vtimer))
+		min_virt = kvm_timer_compute_delta(vcpu, vtimer);
+
+	if (kvm_timer_irq_can_fire(ptimer))
+		min_phys = kvm_timer_compute_delta(vcpu, ptimer);
+
+	/* If none of timers can fire, then return 0 */
+	if ((min_virt == ULLONG_MAX) && (min_phys == ULLONG_MAX))
+		return 0;
+
+	return min(min_virt, min_phys);
+}
+
 static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
 {
 	struct arch_timer_cpu *timer;
@@ -133,7 +162,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
 	 * PoV (NTP on the host may have forced it to expire
 	 * early). If we should have slept longer, restart it.
 	 */
-	ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
+	ns = kvm_timer_earliest_exp(vcpu);
 	if (unlikely(ns)) {
 		hrtimer_forward_now(hrt, ns_to_ktime(ns));
 		return HRTIMER_RESTART;
@@ -143,12 +172,6 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
 	return HRTIMER_NORESTART;
 }
 
-static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
-{
-	return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
-		(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
-}
-
 bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
 			   struct arch_timer_context *timer_ctx)
 {
@@ -233,26 +256,32 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
 
 	BUG_ON(timer_is_armed(timer));
 
 	/*
-	 * No need to schedule a background timer if the guest timer has
+	 * No need to schedule a background timer if any guest timer has
 	 * already expired, because kvm_vcpu_block will return before putting
 	 * the thread to sleep.
 	 */
-	if (kvm_timer_should_fire(vcpu, vtimer))
+	if (kvm_timer_should_fire(vcpu, vtimer) ||
+	    kvm_timer_should_fire(vcpu, ptimer))
 		return;
 
 	/*
-	 * If the timer is not capable of raising interrupts (disabled or
+	 * If both timers are not capable of raising interrupts (disabled or
 	 * masked), then there's no more work for us to do.
 	 */
-	if (!kvm_timer_irq_can_fire(vtimer))
+	if (!kvm_timer_irq_can_fire(vtimer) &&
+	    !kvm_timer_irq_can_fire(ptimer))
 		return;
 
-	/*  The timer has not yet expired, schedule a background timer */
-	timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
+	/*
+	 * The guest timers have not yet expired, schedule a background timer.
+	 * Set the earliest expiration time among the guest timers.
+	 */
+	timer_arm(timer, kvm_timer_earliest_exp(vcpu));
 }
 
 void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 07/10] KVM: arm/arm64: Set a background timer to the earliest timer expiration
@ 2017-01-27  1:04   ` Jintack Lim
  0 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: linux-arm-kernel

When scheduling a background timer, consider both the virtual and the
physical timer and pick the earliest expiration time.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm/kvm/arm.c        |  3 ++-
 virt/kvm/arm/arch_timer.c | 55 ++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 44 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9a34a3c..9e94f87 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -301,7 +301,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 {
-	return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
+	return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu)) ||
+		kvm_timer_should_fire(vcpu, vcpu_ptimer(vcpu));
 }
 
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 3b6bd50..d3925e2 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -119,6 +119,35 @@ static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
+{
+	return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
+		(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
+}
+
+/*
+ * Returns the earliest expiration time in ns among guest timers.
+ * Note that it will return 0 if none of timers can fire.
+ */
+static u64 kvm_timer_earliest_exp(struct kvm_vcpu *vcpu)
+{
+	u64 min_virt = ULLONG_MAX, min_phys = ULLONG_MAX;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+	if (kvm_timer_irq_can_fire(vtimer))
+		min_virt = kvm_timer_compute_delta(vcpu, vtimer);
+
+	if (kvm_timer_irq_can_fire(ptimer))
+		min_phys = kvm_timer_compute_delta(vcpu, ptimer);
+
+	/* If none of timers can fire, then return 0 */
+	if ((min_virt == ULLONG_MAX) && (min_phys == ULLONG_MAX))
+		return 0;
+
+	return min(min_virt, min_phys);
+}
+
 static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
 {
 	struct arch_timer_cpu *timer;
@@ -133,7 +162,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
 	 * PoV (NTP on the host may have forced it to expire
 	 * early). If we should have slept longer, restart it.
 	 */
-	ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
+	ns = kvm_timer_earliest_exp(vcpu);
 	if (unlikely(ns)) {
 		hrtimer_forward_now(hrt, ns_to_ktime(ns));
 		return HRTIMER_RESTART;
@@ -143,12 +172,6 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
 	return HRTIMER_NORESTART;
 }
 
-static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
-{
-	return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
-		(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
-}
-
 bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
 			   struct arch_timer_context *timer_ctx)
 {
@@ -233,26 +256,32 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
 
 	BUG_ON(timer_is_armed(timer));
 
 	/*
-	 * No need to schedule a background timer if the guest timer has
+	 * No need to schedule a background timer if any guest timer has
 	 * already expired, because kvm_vcpu_block will return before putting
 	 * the thread to sleep.
 	 */
-	if (kvm_timer_should_fire(vcpu, vtimer))
+	if (kvm_timer_should_fire(vcpu, vtimer) ||
+	    kvm_timer_should_fire(vcpu, ptimer))
 		return;
 
 	/*
-	 * If the timer is not capable of raising interrupts (disabled or
+	 * If both timers are not capable of raising interrupts (disabled or
 	 * masked), then there's no more work for us to do.
 	 */
-	if (!kvm_timer_irq_can_fire(vtimer))
+	if (!kvm_timer_irq_can_fire(vtimer) &&
+	    !kvm_timer_irq_can_fire(ptimer))
 		return;
 
-	/*  The timer has not yet expired, schedule a background timer */
-	timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
+	/*
+	 * The guest timers have not yet expired, schedule a background timer.
+	 * Set the earliest expiration time among the guest timers.
+	 */
+	timer_arm(timer, kvm_timer_earliest_exp(vcpu));
 }
 
 void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 08/10] KVM: arm/arm64: Set up a background timer for the physical timer emulation
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:04   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

Set a background timer for the EL1 physical timer emulation while VMs
are running, so that VMs get the physical timer interrupts in a timely
manner.

Schedule the background timer on entry to the VM and cancel it on exit.
This has no performance impact on guest OSes that only use the virtual
timer, since for them the emulated physical timer is never enabled and no
background timer is armed.
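
A small standalone model of that entry/exit pairing (a sketch only:
stand-in types, with kvm_timer_flush_hwstate()/kvm_timer_sync_hwstate()
reduced to the arm/cancel decision made in this patch):

#include <stdbool.h>
#include <stdio.h>

struct soft_timer { bool armed; };

static void timer_arm(struct soft_timer *t)
{
	t->armed = true;
	puts("background timer armed");
}

static void timer_disarm(struct soft_timer *t)
{
	if (t->armed) {
		t->armed = false;
		puts("background timer cancelled");
	}
}

/* Entry path: arm only if the emulated physical timer is enabled,
 * unmasked and not already expired. */
static void flush_hwstate(struct soft_timer *bg, bool can_fire, bool expired)
{
	if (can_fire && !expired)
		timer_arm(bg);
}

/* Exit path: cancel the background timer if it was set. */
static void sync_hwstate(struct soft_timer *bg)
{
	timer_disarm(bg);
}

int main(void)
{
	struct soft_timer bg = { false };

	/* Guest that never enables CNTP_CTL: nothing is armed, no overhead. */
	flush_hwstate(&bg, false, false);
	sync_hwstate(&bg);

	/* Guest with an enabled, not-yet-expired physical timer. */
	flush_hwstate(&bg, true, false);
	sync_hwstate(&bg);
	return 0;
}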

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 virt/kvm/arm/arch_timer.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index d3925e2..b366bb2 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -247,6 +247,23 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+/* Schedule the background timer for the emulated timer. */
+static void kvm_timer_emulate(struct kvm_vcpu *vcpu,
+			      struct arch_timer_context *timer_ctx)
+{
+	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+
+	if (kvm_timer_should_fire(vcpu, timer_ctx))
+		return;
+
+	if (!kvm_timer_irq_can_fire(timer_ctx))
+		return;
+
+	/*  The timer has not yet expired, schedule a background timer */
+	BUG_ON(timer_is_armed(timer));
+	timer_arm(timer, kvm_timer_compute_delta(vcpu, timer_ctx));
+}
+
 /*
  * Schedule the background timer before calling kvm_vcpu_block, so that this
  * thread is removed from its waitqueue and made runnable when there's a timer
@@ -306,6 +323,9 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
 	if (kvm_timer_update_state(vcpu))
 		return;
 
+	/* Set the background timer for the physical timer emulation. */
+	kvm_timer_emulate(vcpu, vcpu_ptimer(vcpu));
+
 	/*
 	* If we enter the guest with the virtual input level to the VGIC
 	* asserted, then we have already told the VGIC what we need to, and
@@ -368,7 +388,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 
-	BUG_ON(timer_is_armed(timer));
+	/*
+	 * This is to cancel the background timer for the physical timer
+	 * emulation if it is set.
+	 */
+	timer_disarm(timer);
 
 	/*
 	 * The guest could have modified the timer registers or the timer
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 08/10] KVM: arm/arm64: Set up a background timer for the physical timer emulation
@ 2017-01-27  1:04   ` Jintack Lim
  0 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: linux-arm-kernel

Set a background timer for the EL1 physical timer emulation while VMs
are running, so that VMs get the physical timer interrupts in a timely
manner.

Schedule the background timer on entry to the VM and cancel it on exit.
This has no performance impact on guest OSes that only use the virtual
timer, since for them the emulated physical timer is never enabled and no
background timer is armed.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 virt/kvm/arm/arch_timer.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index d3925e2..b366bb2 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -247,6 +247,23 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+/* Schedule the background timer for the emulated timer. */
+static void kvm_timer_emulate(struct kvm_vcpu *vcpu,
+			      struct arch_timer_context *timer_ctx)
+{
+	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+
+	if (kvm_timer_should_fire(vcpu, timer_ctx))
+		return;
+
+	if (!kvm_timer_irq_can_fire(timer_ctx))
+		return;
+
+	/*  The timer has not yet expired, schedule a background timer */
+	BUG_ON(timer_is_armed(timer));
+	timer_arm(timer, kvm_timer_compute_delta(vcpu, timer_ctx));
+}
+
 /*
  * Schedule the background timer before calling kvm_vcpu_block, so that this
  * thread is removed from its waitqueue and made runnable when there's a timer
@@ -306,6 +323,9 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
 	if (kvm_timer_update_state(vcpu))
 		return;
 
+	/* Set the background timer for the physical timer emulation. */
+	kvm_timer_emulate(vcpu, vcpu_ptimer(vcpu));
+
 	/*
 	* If we enter the guest with the virtual input level to the VGIC
 	* asserted, then we have already told the VGIC what we need to, and
@@ -368,7 +388,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 
-	BUG_ON(timer_is_armed(timer));
+	/*
+	 * This is to cancel the background timer for the physical timer
+	 * emulation if it is set.
+	 */
+	timer_disarm(timer);
 
 	/*
 	 * The guest could have modified the timer registers or the timer
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 09/10] KVM: arm64: Add the EL1 physical timer access handler
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:04   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

KVM traps EL1 physical timer accesses from VMs, but it doesn't handle
those traps, which results in the VM being terminated. Instead, add
handlers for the EL1 physical timer registers and, as an intermediate
step, have them inject an undefined exception.
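
For reference, the three encodings added to the sys_regs table below
(Op0=3, Op1=3, CRn=14, CRm=2) select the EL1 physical timer registers by
Op2; a trivial standalone decode of those values:

#include <stdio.h>

static const char *cntp_reg_name(unsigned int op2)
{
	switch (op2) {
	case 0: return "CNTP_TVAL_EL0";
	case 1: return "CNTP_CTL_EL0";
	case 2: return "CNTP_CVAL_EL0";
	default: return "unknown";
	}
}

int main(void)
{
	unsigned int op2;

	for (op2 = 0; op2 < 3; op2++)
		printf("Op0=3 Op1=3 CRn=14 CRm=2 Op2=%u -> %s\n",
		       op2, cntp_reg_name(op2));
	return 0;
}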

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm64/kvm/sys_regs.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 87e7e66..fd9e747 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -820,6 +820,30 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	  CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+static bool access_cntp_tval(struct kvm_vcpu *vcpu,
+		struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	kvm_inject_undefined(vcpu);
+	return true;
+}
+
+static bool access_cntp_ctl(struct kvm_vcpu *vcpu,
+		struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	kvm_inject_undefined(vcpu);
+	return true;
+}
+
+static bool access_cntp_cval(struct kvm_vcpu *vcpu,
+		struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	kvm_inject_undefined(vcpu);
+	return true;
+}
+
 /*
  * Architected system registers.
  * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -1029,6 +1053,16 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
 	  NULL, reset_unknown, TPIDRRO_EL0 },
 
+	/* CNTP_TVAL_EL0 */
+	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b000),
+	  access_cntp_tval },
+	/* CNTP_CTL_EL0 */
+	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b001),
+	  access_cntp_ctl },
+	/* CNTP_CVAL_EL0 */
+	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b010),
+	  access_cntp_cval },
+
 	/* PMEVCNTRn_EL0 */
 	PMU_PMEVCNTR_EL0(0),
 	PMU_PMEVCNTR_EL0(1),
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 09/10] KVM: arm64: Add the EL1 physical timer access handler
@ 2017-01-27  1:04   ` Jintack Lim
  0 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:04 UTC (permalink / raw)
  To: linux-arm-kernel

KVM traps EL1 physical timer accesses from VMs, but it doesn't handle
those traps, which results in the VM being terminated. Instead, add
handlers for the EL1 physical timer registers and, as an intermediate
step, have them inject an undefined exception.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm64/kvm/sys_regs.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 87e7e66..fd9e747 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -820,6 +820,30 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	  CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+static bool access_cntp_tval(struct kvm_vcpu *vcpu,
+		struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	kvm_inject_undefined(vcpu);
+	return true;
+}
+
+static bool access_cntp_ctl(struct kvm_vcpu *vcpu,
+		struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	kvm_inject_undefined(vcpu);
+	return true;
+}
+
+static bool access_cntp_cval(struct kvm_vcpu *vcpu,
+		struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	kvm_inject_undefined(vcpu);
+	return true;
+}
+
 /*
  * Architected system registers.
  * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -1029,6 +1053,16 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
 	  NULL, reset_unknown, TPIDRRO_EL0 },
 
+	/* CNTP_TVAL_EL0 */
+	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b000),
+	  access_cntp_tval },
+	/* CNTP_CTL_EL0 */
+	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b001),
+	  access_cntp_ctl },
+	/* CNTP_CVAL_EL0 */
+	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b010),
+	  access_cntp_cval },
+
 	/* PMEVCNTRn_EL0 */
 	PMU_PMEVCNTR_EL0(0),
 	PMU_PMEVCNTR_EL0(1),
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 10/10] KVM: arm/arm64: Emulate the EL1 phys timer register access
  2017-01-27  1:04 ` Jintack Lim
@ 2017-01-27  1:05   ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:05 UTC (permalink / raw)
  To: pbonzini, rkrcmar, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, andre.przywara, kvm,
	linux-arm-kernel, kvmarm, linux-kernel

Emulate read and write operations to CNTP_TVAL, CNTP_CVAL and CNTP_CTL.
Now VMs are able to use the EL1 physical timer.
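
The register semantics being emulated are small enough to capture in a
standalone model (a sketch only: the physical counter is passed in as
'now', the structure is reduced to the two fields involved, and the
architectural 32-bit width of TVAL is glossed over, as in the handlers
below):

#include <stdint.h>
#include <stdio.h>

#define CTRL_IT_STAT	(1u << 2)	/* ISTATUS, read-only */

struct ptimer {
	uint32_t cnt_ctl;
	uint64_t cnt_cval;
};

/* TVAL is kept as an offset from the current count: CVAL = now + TVAL. */
static void write_tval(struct ptimer *p, uint64_t now, uint64_t tval)
{
	p->cnt_cval = now + tval;
}

static uint64_t read_tval(const struct ptimer *p, uint64_t now)
{
	return p->cnt_cval - now;
}

/* Writes cannot set ISTATUS; reads report it once CVAL <= now. */
static void write_ctl(struct ptimer *p, uint32_t val)
{
	p->cnt_ctl = val & ~CTRL_IT_STAT;
}

static uint32_t read_ctl(const struct ptimer *p, uint64_t now)
{
	uint32_t val = p->cnt_ctl;

	if (p->cnt_cval <= now)
		val |= CTRL_IT_STAT;
	return val;
}

int main(void)
{
	struct ptimer p = { 0, 0 };

	write_tval(&p, 1000, 500);	/* CVAL becomes 1500 */
	write_ctl(&p, 0x5);		/* ISTATUS bit is dropped, ENABLE kept */
	printf("ctl@1000=%#x ctl@2000=%#x tval@1200=%llu\n",
	       (unsigned int)read_ctl(&p, 1000),
	       (unsigned int)read_ctl(&p, 2000),
	       (unsigned long long)read_tval(&p, 1200));
	return 0;
}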

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm64/kvm/sys_regs.c    | 32 +++++++++++++++++++++++++++++---
 include/kvm/arm_arch_timer.h |  2 ++
 virt/kvm/arm/arch_timer.c    |  2 +-
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fd9e747..adf009f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -824,7 +824,14 @@ static bool access_cntp_tval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
 {
-	kvm_inject_undefined(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+	u64 now = kvm_phys_timer_read();
+
+	if (p->is_write)
+		ptimer->cnt_cval = p->regval + now;
+	else
+		p->regval = ptimer->cnt_cval - now;
+
 	return true;
 }
 
@@ -832,7 +839,20 @@ static bool access_cntp_ctl(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
 {
-	kvm_inject_undefined(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+	if (p->is_write) {
+		/* ISTATUS bit is read-only */
+		ptimer->cnt_ctl = p->regval & ~ARCH_TIMER_CTRL_IT_STAT;
+	} else {
+		u64 now = kvm_phys_timer_read();
+
+		p->regval = ptimer->cnt_ctl;
+		/* Set ISTATUS bit if it's expired */
+		if (ptimer->cnt_cval <= now)
+			p->regval |= ARCH_TIMER_CTRL_IT_STAT;
+	}
+
 	return true;
 }
 
@@ -840,7 +860,13 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
 {
-	kvm_inject_undefined(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+	if (p->is_write)
+		ptimer->cnt_cval = p->regval;
+	else
+		p->regval = ptimer->cnt_cval;
+
 	return true;
 }
 
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index a364593..fec99f2 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -74,6 +74,8 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
 void kvm_timer_schedule(struct kvm_vcpu *vcpu);
 void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
 
+u64 kvm_phys_timer_read(void);
+
 void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu);
 
 void kvm_timer_init_vhe(void);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index b366bb2..9eec063 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -40,7 +40,7 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
 	vcpu_vtimer(vcpu)->active_cleared_last = false;
 }
 
-static u64 kvm_phys_timer_read(void)
+u64 kvm_phys_timer_read(void)
 {
 	return timecounter->cc->read(timecounter->cc);
 }
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [RFC v2 10/10] KVM: arm/arm64: Emulate the EL1 phys timer register access
@ 2017-01-27  1:05   ` Jintack Lim
  0 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-27  1:05 UTC (permalink / raw)
  To: linux-arm-kernel

Emulate read and write operations to CNTP_TVAL, CNTP_CVAL and CNTP_CTL.
Now VMs are able to use the EL1 physical timer.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
 arch/arm64/kvm/sys_regs.c    | 32 +++++++++++++++++++++++++++++---
 include/kvm/arm_arch_timer.h |  2 ++
 virt/kvm/arm/arch_timer.c    |  2 +-
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fd9e747..adf009f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -824,7 +824,14 @@ static bool access_cntp_tval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
 {
-	kvm_inject_undefined(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+	u64 now = kvm_phys_timer_read();
+
+	if (p->is_write)
+		ptimer->cnt_cval = p->regval + now;
+	else
+		p->regval = ptimer->cnt_cval - now;
+
 	return true;
 }
 
@@ -832,7 +839,20 @@ static bool access_cntp_ctl(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
 {
-	kvm_inject_undefined(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+	if (p->is_write) {
+		/* ISTATUS bit is read-only */
+		ptimer->cnt_ctl = p->regval & ~ARCH_TIMER_CTRL_IT_STAT;
+	} else {
+		u64 now = kvm_phys_timer_read();
+
+		p->regval = ptimer->cnt_ctl;
+		/* Set ISTATUS bit if it's expired */
+		if (ptimer->cnt_cval <= now)
+			p->regval |= ARCH_TIMER_CTRL_IT_STAT;
+	}
+
 	return true;
 }
 
@@ -840,7 +860,13 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
 {
-	kvm_inject_undefined(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+	if (p->is_write)
+		ptimer->cnt_cval = p->regval;
+	else
+		p->regval = ptimer->cnt_cval;
+
 	return true;
 }
 
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index a364593..fec99f2 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -74,6 +74,8 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
 void kvm_timer_schedule(struct kvm_vcpu *vcpu);
 void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
 
+u64 kvm_phys_timer_read(void);
+
 void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu);
 
 void kvm_timer_init_vhe(void);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index b366bb2..9eec063 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -40,7 +40,7 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
 	vcpu_vtimer(vcpu)->active_cleared_last = false;
 }
 
-static u64 kvm_phys_timer_read(void)
+u64 kvm_phys_timer_read(void)
 {
 	return timecounter->cc->read(timecounter->cc);
 }
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* Re: [RFC v2 01/10] KVM: arm/arm64: Abstract virtual timer context into separate structure
  2017-01-27  1:04   ` Jintack Lim
  (?)
@ 2017-01-29 11:44     ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 11:44 UTC (permalink / raw)
  To: Jintack Lim
  Cc: pbonzini, rkrcmar, christoffer.dall, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Fri, Jan 27 2017 at 01:04:51 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> Abstract virtual timer context into a separate structure and change all
> callers referring to timer registers, irq state and so on. No change in
> functionality.
>
> This is about to become very handy when adding the EL1 physical timer.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> Acked-by: Christoffer Dall <christoffer.dall@linaro.org>

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

          M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 01/10] KVM: arm/arm64: Abstract virtual timer context into separate structure
@ 2017-01-29 11:44     ` Marc Zyngier
  0 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 11:44 UTC (permalink / raw)
  To: Jintack Lim
  Cc: kvm, catalin.marinas, will.deacon, linux, linux-kernel,
	linux-arm-kernel, andre.przywara, pbonzini, kvmarm

On Fri, Jan 27 2017 at 01:04:51 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> Abstract virtual timer context into a separate structure and change all
> callers referring to timer registers, irq state and so on. No change in
> functionality.
>
> This is about to become very handy when adding the EL1 physical timer.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> Acked-by: Christoffer Dall <christoffer.dall@linaro.org>

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

          M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* [RFC v2 01/10] KVM: arm/arm64: Abstract virtual timer context into separate structure
@ 2017-01-29 11:44     ` Marc Zyngier
  0 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 11:44 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jan 27 2017 at 01:04:51 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> Abstract virtual timer context into a separate structure and change all
> callers referring to timer registers, irq state and so on. No change in
> functionality.
>
> This is about to become very handy when adding the EL1 physical timer.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> Acked-by: Christoffer Dall <christoffer.dall@linaro.org>

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

          M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context
  2017-01-27  1:04   ` Jintack Lim
  (?)
@ 2017-01-29 11:54     ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 11:54 UTC (permalink / raw)
  To: Jintack Lim
  Cc: pbonzini, rkrcmar, christoffer.dall, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Fri, Jan 27 2017 at 01:04:52 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> Make cntvoff per each timer context. This is helpful to abstract kvm
> timer functions to work with timer context without considering timer
> types (e.g. physical timer or virtual timer).
>
> This also would pave the way for ever doing adjustments of the cntvoff
> on a per-CPU basis if that should ever make sense.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> ---
>  arch/arm/include/asm/kvm_host.h   |  6 +++---
>  arch/arm64/include/asm/kvm_host.h |  4 ++--
>  include/kvm/arm_arch_timer.h      |  8 +++-----
>  virt/kvm/arm/arch_timer.c         | 26 ++++++++++++++++++++------
>  virt/kvm/arm/hyp/timer-sr.c       |  3 +--
>  5 files changed, 29 insertions(+), 18 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index d5423ab..f5456a9 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -60,9 +60,6 @@ struct kvm_arch {
>  	/* The last vcpu id that ran on each physical CPU */
>  	int __percpu *last_vcpu_ran;
>  
> -	/* Timer */
> -	struct arch_timer_kvm	timer;
> -
>  	/*
>  	 * Anything that is not used directly from assembly code goes
>  	 * here.
> @@ -75,6 +72,9 @@ struct kvm_arch {
>  	/* Stage-2 page table */
>  	pgd_t *pgd;
>  
> +	/* A lock to synchronize cntvoff among all vtimer context of vcpus */
> +	spinlock_t cntvoff_lock;

Is there any condition where we need this to be a spinlock? I would have
thought that a mutex should have been enough, as this should only be
updated on migration or initialization. Not that it matters much in this
case, but I wondered if there is something I'm missing.

> +
>  	/* Interrupt controller */
>  	struct vgic_dist	vgic;
>  	int max_vcpus;
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e505038..23749a8 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -71,8 +71,8 @@ struct kvm_arch {
>  	/* Interrupt controller */
>  	struct vgic_dist	vgic;
>  
> -	/* Timer */
> -	struct arch_timer_kvm	timer;
> +	/* A lock to synchronize cntvoff among all vtimer context of vcpus */
> +	spinlock_t cntvoff_lock;
>  };
>  
>  #define KVM_NR_MEM_OBJS     40
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index daad3c1..1b9c988 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -23,11 +23,6 @@
>  #include <linux/hrtimer.h>
>  #include <linux/workqueue.h>
>  
> -struct arch_timer_kvm {
> -	/* Virtual offset */
> -	u64			cntvoff;
> -};
> -
>  struct arch_timer_context {
>  	/* Registers: control register, timer value */
>  	u32				cnt_ctl;
> @@ -38,6 +33,9 @@ struct arch_timer_context {
>  
>  	/* Active IRQ state caching */
>  	bool				active_cleared_last;
> +
> +	/* Virtual offset */
> +	u64			cntvoff;
>  };
>  
>  struct arch_timer_cpu {
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index 6740efa..fa4c042 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -101,9 +101,10 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>  static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
>  {
>  	u64 cval, now;
> +	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
> -	cval = vcpu_vtimer(vcpu)->cnt_cval;
> -	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
> +	cval = vtimer->cnt_cval;
> +	now = kvm_phys_timer_read() - vtimer->cntvoff;
>  
>  	if (now < cval) {
>  		u64 ns;
> @@ -159,7 +160,7 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
>  		return false;
>  
>  	cval = vtimer->cnt_cval;
> -	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
> +	now = kvm_phys_timer_read() - vtimer->cntvoff;
>  
>  	return cval <= now;
>  }
> @@ -353,10 +354,23 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  	return 0;
>  }
>  
> +/* Make the updates of cntvoff for all vtimer contexts atomic */
> +static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)

Arguably, this acts on the VM itself and not on a single vcpu. Maybe you
should consider passing the struct kvm pointer to reflect this.
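
Something along these lines, purely as a sketch of the suggestion (same
body as below, just keyed off the struct kvm; it relies on the
vcpu_vtimer() helper and the cntvoff_lock introduced in this series):

/* Sketch only, not tested. */
static void update_vtimer_cntvoff(struct kvm *kvm, u64 cntvoff)
{
	struct kvm_vcpu *vcpu;
	int i;

	spin_lock(&kvm->arch.cntvoff_lock);
	kvm_for_each_vcpu(i, vcpu, kvm)
		vcpu_vtimer(vcpu)->cntvoff = cntvoff;
	spin_unlock(&kvm->arch.cntvoff_lock);
}

with the callers then passing vcpu->kvm.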

> +{
> +	int i;
> +
> +	spin_lock(&vcpu->kvm->arch.cntvoff_lock);
> +	kvm_for_each_vcpu(i, vcpu, vcpu->kvm)
> +		vcpu_vtimer(vcpu)->cntvoff = cntvoff;
> +	spin_unlock(&vcpu->kvm->arch.cntvoff_lock);
> +}
> +
>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
>  {
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>  
> +	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());

Maybe a comment indicating that we recompute CNTVOFF for all vcpus would
be welcome (this is not a change in semantics, but it was never obvious
in the existing code).

> +
>  	INIT_WORK(&timer->expired, kvm_timer_inject_irq_work);
>  	hrtimer_init(&timer->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
>  	timer->timer.function = kvm_timer_expire;
> @@ -376,7 +390,7 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
>  		vtimer->cnt_ctl = value;
>  		break;
>  	case KVM_REG_ARM_TIMER_CNT:
> -		vcpu->kvm->arch.timer.cntvoff = kvm_phys_timer_read() - value;
> +		update_vtimer_cntvoff(vcpu, kvm_phys_timer_read() - value);
>  		break;
>  	case KVM_REG_ARM_TIMER_CVAL:
>  		vtimer->cnt_cval = value;
> @@ -397,7 +411,7 @@ u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid)
>  	case KVM_REG_ARM_TIMER_CTL:
>  		return vtimer->cnt_ctl;
>  	case KVM_REG_ARM_TIMER_CNT:
> -		return kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
> +		return kvm_phys_timer_read() - vtimer->cntvoff;
>  	case KVM_REG_ARM_TIMER_CVAL:
>  		return vtimer->cnt_cval;
>  	}
> @@ -511,7 +525,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
>  
>  void kvm_timer_init(struct kvm *kvm)
>  {
> -	kvm->arch.timer.cntvoff = kvm_phys_timer_read();
> +	spin_lock_init(&kvm->arch.cntvoff_lock);
>  }
>  
>  /*
> diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
> index 0cf0895..4734915 100644
> --- a/virt/kvm/arm/hyp/timer-sr.c
> +++ b/virt/kvm/arm/hyp/timer-sr.c
> @@ -53,7 +53,6 @@ void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
>  
>  void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
>  {
> -	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>  	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  	u64 val;
> @@ -71,7 +70,7 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
>  	}
>  
>  	if (timer->enabled) {
> -		write_sysreg(kvm->arch.timer.cntvoff, cntvoff_el2);
> +		write_sysreg(vtimer->cntvoff, cntvoff_el2);
>  		write_sysreg_el0(vtimer->cnt_cval, cntv_cval);
>  		isb();
>  		write_sysreg_el0(vtimer->cnt_ctl, cntv_ctl);

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context
@ 2017-01-29 11:54     ` Marc Zyngier
  0 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 11:54 UTC (permalink / raw)
  To: Jintack Lim
  Cc: pbonzini, rkrcmar, christoffer.dall, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Fri, Jan 27 2017 at 01:04:52 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> Make cntvoff per each timer context. This is helpful to abstract kvm
> timer functions to work with timer context without considering timer
> types (e.g. physical timer or virtual timer).
>
> This also would pave the way for ever doing adjustments of the cntvoff
> on a per-CPU basis if that should ever make sense.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> ---
>  arch/arm/include/asm/kvm_host.h   |  6 +++---
>  arch/arm64/include/asm/kvm_host.h |  4 ++--
>  include/kvm/arm_arch_timer.h      |  8 +++-----
>  virt/kvm/arm/arch_timer.c         | 26 ++++++++++++++++++++------
>  virt/kvm/arm/hyp/timer-sr.c       |  3 +--
>  5 files changed, 29 insertions(+), 18 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index d5423ab..f5456a9 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -60,9 +60,6 @@ struct kvm_arch {
>  	/* The last vcpu id that ran on each physical CPU */
>  	int __percpu *last_vcpu_ran;
>  
> -	/* Timer */
> -	struct arch_timer_kvm	timer;
> -
>  	/*
>  	 * Anything that is not used directly from assembly code goes
>  	 * here.
> @@ -75,6 +72,9 @@ struct kvm_arch {
>  	/* Stage-2 page table */
>  	pgd_t *pgd;
>  
> +	/* A lock to synchronize cntvoff among all vtimer context of vcpus */
> +	spinlock_t cntvoff_lock;

Is there any condition where we need this to be a spinlock? I would have
thought that a mutex should have been enough, as this should only be
updated on migration or initialization. Not that it matters much in this
case, but I wondered if there is something I'm missing.

> +
>  	/* Interrupt controller */
>  	struct vgic_dist	vgic;
>  	int max_vcpus;
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e505038..23749a8 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -71,8 +71,8 @@ struct kvm_arch {
>  	/* Interrupt controller */
>  	struct vgic_dist	vgic;
>  
> -	/* Timer */
> -	struct arch_timer_kvm	timer;
> +	/* A lock to synchronize cntvoff among all vtimer context of vcpus */
> +	spinlock_t cntvoff_lock;
>  };
>  
>  #define KVM_NR_MEM_OBJS     40
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index daad3c1..1b9c988 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -23,11 +23,6 @@
>  #include <linux/hrtimer.h>
>  #include <linux/workqueue.h>
>  
> -struct arch_timer_kvm {
> -	/* Virtual offset */
> -	u64			cntvoff;
> -};
> -
>  struct arch_timer_context {
>  	/* Registers: control register, timer value */
>  	u32				cnt_ctl;
> @@ -38,6 +33,9 @@ struct arch_timer_context {
>  
>  	/* Active IRQ state caching */
>  	bool				active_cleared_last;
> +
> +	/* Virtual offset */
> +	u64			cntvoff;
>  };
>  
>  struct arch_timer_cpu {
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index 6740efa..fa4c042 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -101,9 +101,10 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>  static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
>  {
>  	u64 cval, now;
> +	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
> -	cval = vcpu_vtimer(vcpu)->cnt_cval;
> -	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
> +	cval = vtimer->cnt_cval;
> +	now = kvm_phys_timer_read() - vtimer->cntvoff;
>  
>  	if (now < cval) {
>  		u64 ns;
> @@ -159,7 +160,7 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
>  		return false;
>  
>  	cval = vtimer->cnt_cval;
> -	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
> +	now = kvm_phys_timer_read() - vtimer->cntvoff;
>  
>  	return cval <= now;
>  }
> @@ -353,10 +354,23 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  	return 0;
>  }
>  
> +/* Make the updates of cntvoff for all vtimer contexts atomic */
> +static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)

Arguably, this acts on the VM itself and not on a single vcpu. Maybe you
should consider passing the struct kvm pointer to reflect this.

> +{
> +	int i;
> +
> +	spin_lock(&vcpu->kvm->arch.cntvoff_lock);
> +	kvm_for_each_vcpu(i, vcpu, vcpu->kvm)
> +		vcpu_vtimer(vcpu)->cntvoff = cntvoff;
> +	spin_unlock(&vcpu->kvm->arch.cntvoff_lock);
> +}
> +
>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
>  {
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>  
> +	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());

Maybe a comment indicating that we recompute CNTVOFF for all vcpus would
be welcome (this is not a change in semantics, but it was never obvious
in the existing code).

> +
>  	INIT_WORK(&timer->expired, kvm_timer_inject_irq_work);
>  	hrtimer_init(&timer->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
>  	timer->timer.function = kvm_timer_expire;
> @@ -376,7 +390,7 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
>  		vtimer->cnt_ctl = value;
>  		break;
>  	case KVM_REG_ARM_TIMER_CNT:
> -		vcpu->kvm->arch.timer.cntvoff = kvm_phys_timer_read() - value;
> +		update_vtimer_cntvoff(vcpu, kvm_phys_timer_read() - value);
>  		break;
>  	case KVM_REG_ARM_TIMER_CVAL:
>  		vtimer->cnt_cval = value;
> @@ -397,7 +411,7 @@ u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid)
>  	case KVM_REG_ARM_TIMER_CTL:
>  		return vtimer->cnt_ctl;
>  	case KVM_REG_ARM_TIMER_CNT:
> -		return kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
> +		return kvm_phys_timer_read() - vtimer->cntvoff;
>  	case KVM_REG_ARM_TIMER_CVAL:
>  		return vtimer->cnt_cval;
>  	}
> @@ -511,7 +525,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
>  
>  void kvm_timer_init(struct kvm *kvm)
>  {
> -	kvm->arch.timer.cntvoff = kvm_phys_timer_read();
> +	spin_lock_init(&kvm->arch.cntvoff_lock);
>  }
>  
>  /*
> diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
> index 0cf0895..4734915 100644
> --- a/virt/kvm/arm/hyp/timer-sr.c
> +++ b/virt/kvm/arm/hyp/timer-sr.c
> @@ -53,7 +53,6 @@ void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
>  
>  void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
>  {
> -	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>  	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  	u64 val;
> @@ -71,7 +70,7 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
>  	}
>  
>  	if (timer->enabled) {
> -		write_sysreg(kvm->arch.timer.cntvoff, cntvoff_el2);
> +		write_sysreg(vtimer->cntvoff, cntvoff_el2);
>  		write_sysreg_el0(vtimer->cnt_cval, cntv_cval);
>  		isb();
>  		write_sysreg_el0(vtimer->cnt_ctl, cntv_ctl);

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 03/10] KVM: arm/arm64: Decouple kvm timer functions from virtual timer
  2017-01-27  1:04   ` Jintack Lim
  (?)
@ 2017-01-29 12:01     ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 12:01 UTC (permalink / raw)
  To: Jintack Lim
  Cc: pbonzini, rkrcmar, christoffer.dall, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Fri, Jan 27 2017 at 01:04:53 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> Now that we have a separate structure for timer context, make functions
> general so that they can work with any timer context, not just the

  generic?

> virtual timer context.  This does not change the virtual timer
> functionality.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> ---
>  arch/arm/kvm/arm.c           |  2 +-
>  include/kvm/arm_arch_timer.h |  3 ++-
>  virt/kvm/arm/arch_timer.c    | 55 ++++++++++++++++++++++----------------------
>  3 files changed, 30 insertions(+), 30 deletions(-)
>
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 9d74464..9a34a3c 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -301,7 +301,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
>  
>  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
>  {
> -	return kvm_timer_should_fire(vcpu);
> +	return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
>  }
>  
>  void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index 1b9c988..d921d20 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -67,7 +67,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
>  int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
>  
> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu);
> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
> +			   struct arch_timer_context *timer_ctx);
>  void kvm_timer_schedule(struct kvm_vcpu *vcpu);
>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
>  
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index fa4c042..f72005a 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -98,13 +98,13 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>  	kvm_vcpu_kick(vcpu);
>  }
>  
> -static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
> +static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
> +				   struct arch_timer_context *timer_ctx)
>  {
>  	u64 cval, now;
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
> -	cval = vtimer->cnt_cval;
> -	now = kvm_phys_timer_read() - vtimer->cntvoff;
> +	cval = timer_ctx->cnt_cval;
> +	now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>  
>  	if (now < cval) {
>  		u64 ns;
> @@ -133,7 +133,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>  	 * PoV (NTP on the host may have forced it to expire
>  	 * early). If we should have slept longer, restart it.
>  	 */
> -	ns = kvm_timer_compute_delta(vcpu);
> +	ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
>  	if (unlikely(ns)) {
>  		hrtimer_forward_now(hrt, ns_to_ktime(ns));
>  		return HRTIMER_RESTART;
> @@ -143,42 +143,40 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>  	return HRTIMER_NORESTART;
>  }
>  
> -static bool kvm_timer_irq_can_fire(struct kvm_vcpu *vcpu)
> +static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
>  {
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
> -
> -	return !(vtimer->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
> -		(vtimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
> +	return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
> +		(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
>  }
>  
> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
> +			   struct arch_timer_context *timer_ctx)
>  {
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  	u64 cval, now;
>  
> -	if (!kvm_timer_irq_can_fire(vcpu))
> +	if (!kvm_timer_irq_can_fire(timer_ctx))
>  		return false;
>  
> -	cval = vtimer->cnt_cval;
> -	now = kvm_phys_timer_read() - vtimer->cntvoff;
> +	cval = timer_ctx->cnt_cval;
> +	now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>  
>  	return cval <= now;
>  }
>  
> -static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
> +static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
> +					struct arch_timer_context *timer_ctx)
>  {
>  	int ret;
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
>  	BUG_ON(!vgic_initialized(vcpu->kvm));
>  
> -	vtimer->active_cleared_last = false;
> -	vtimer->irq.level = new_level;
> -	trace_kvm_timer_update_irq(vcpu->vcpu_id, vtimer->irq.irq,
> -				   vtimer->irq.level);
> +	timer_ctx->active_cleared_last = false;
> +	timer_ctx->irq.level = new_level;
> +	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
> +				   timer_ctx->irq.level);
>  	ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
> -					 vtimer->irq.irq,
> -					 vtimer->irq.level);
> +					 timer_ctx->irq.irq,
> +					 timer_ctx->irq.level);
>  	WARN_ON(ret);
>  }
>  
> @@ -200,8 +198,8 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>  	if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
>  		return -ENODEV;
>  
> -	if (kvm_timer_should_fire(vcpu) != vtimer->irq.level)
> -		kvm_timer_update_irq(vcpu, !vtimer->irq.level);
> +	if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
> +		kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);
>  
>  	return 0;
>  }
> @@ -214,6 +212,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>  void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>  {
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> +	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
>  	BUG_ON(timer_is_armed(timer));
>  
> @@ -222,18 +221,18 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>  	 * already expired, because kvm_vcpu_block will return before putting
>  	 * the thread to sleep.
>  	 */
> -	if (kvm_timer_should_fire(vcpu))
> +	if (kvm_timer_should_fire(vcpu, vtimer))
>  		return;
>  
>  	/*
>  	 * If the timer is not capable of raising interrupts (disabled or
>  	 * masked), then there's no more work for us to do.
>  	 */
> -	if (!kvm_timer_irq_can_fire(vcpu))
> +	if (!kvm_timer_irq_can_fire(vtimer))
>  		return;
>  
>  	/*  The timer has not yet expired, schedule a background timer */
> -	timer_arm(timer, kvm_timer_compute_delta(vcpu));
> +	timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
>  }
>  
>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu)

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

          M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 05/10] KVM: arm/arm64: Initialize the emulated EL1 physical timer
  2017-01-27  1:04   ` Jintack Lim
  (?)
@ 2017-01-29 12:07     ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 12:07 UTC (permalink / raw)
  To: Jintack Lim
  Cc: pbonzini, rkrcmar, christoffer.dall, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Fri, Jan 27 2017 at 01:04:55 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> Initialize the emulated EL1 physical timer with the default irq number.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> ---
>  arch/arm/kvm/reset.c         | 9 ++++++++-
>  arch/arm64/kvm/reset.c       | 9 ++++++++-
>  include/kvm/arm_arch_timer.h | 3 ++-
>  virt/kvm/arm/arch_timer.c    | 9 +++++++--
>  4 files changed, 25 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm/kvm/reset.c b/arch/arm/kvm/reset.c
> index 4b5e802..1da8b2d 100644
> --- a/arch/arm/kvm/reset.c
> +++ b/arch/arm/kvm/reset.c
> @@ -37,6 +37,11 @@
>  	.usr_regs.ARM_cpsr = SVC_MODE | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT,
>  };
>  
> +static const struct kvm_irq_level cortexa_ptimer_irq = {
> +	{ .irq = 30 },
> +	.level = 1,
> +};

At some point, we'll have to make that discoverable/configurable. Maybe
the time for a discoverable arch timer has come (see below).

> +
>  static const struct kvm_irq_level cortexa_vtimer_irq = {
>  	{ .irq = 27 },
>  	.level = 1,
> @@ -58,6 +63,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_regs *reset_regs;
>  	const struct kvm_irq_level *cpu_vtimer_irq;
> +	const struct kvm_irq_level *cpu_ptimer_irq;
>  
>  	switch (vcpu->arch.target) {
>  	case KVM_ARM_TARGET_CORTEX_A7:
> @@ -65,6 +71,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  		reset_regs = &cortexa_regs_reset;
>  		vcpu->arch.midr = read_cpuid_id();
>  		cpu_vtimer_irq = &cortexa_vtimer_irq;
> +		cpu_ptimer_irq = &cortexa_ptimer_irq;
>  		break;
>  	default:
>  		return -ENODEV;
> @@ -77,5 +84,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  	kvm_reset_coprocs(vcpu);
>  
>  	/* Reset arch_timer context */
> -	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
> +	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
>  }
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index e95d4f6..d9e9697 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -46,6 +46,11 @@
>  			COMPAT_PSR_I_BIT | COMPAT_PSR_F_BIT),
>  };
>  
> +static const struct kvm_irq_level default_ptimer_irq = {
> +	.irq	= 30,
> +	.level	= 1,
> +};
> +
>  static const struct kvm_irq_level default_vtimer_irq = {
>  	.irq	= 27,
>  	.level	= 1,
> @@ -104,6 +109,7 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  {
>  	const struct kvm_irq_level *cpu_vtimer_irq;
> +	const struct kvm_irq_level *cpu_ptimer_irq;
>  	const struct kvm_regs *cpu_reset;
>  
>  	switch (vcpu->arch.target) {
> @@ -117,6 +123,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  		}
>  
>  		cpu_vtimer_irq = &default_vtimer_irq;
> +		cpu_ptimer_irq = &default_ptimer_irq;
>  		break;
>  	}
>  
> @@ -130,5 +137,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  	kvm_pmu_vcpu_reset(vcpu);
>  
>  	/* Reset timer */
> -	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
> +	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
>  }
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index 69f648b..a364593 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -59,7 +59,8 @@ struct arch_timer_cpu {
>  int kvm_timer_enable(struct kvm_vcpu *vcpu);
>  void kvm_timer_init(struct kvm *kvm);
>  int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> -			 const struct kvm_irq_level *irq);
> +			 const struct kvm_irq_level *virt_irq,
> +			 const struct kvm_irq_level *phys_irq);
>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
>  void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
>  void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index f72005a..0f6e935 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -329,9 +329,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
>  }
>  
>  int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> -			 const struct kvm_irq_level *irq)
> +			 const struct kvm_irq_level *virt_irq,
> +			 const struct kvm_irq_level *phys_irq)
>  {
>  	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
> +	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
>  
>  	/*
>  	 * The vcpu timer irq number cannot be determined in
> @@ -339,7 +341,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  	 * kvm_vcpu_set_target(). To handle this, we determine
>  	 * vcpu timer irq number when the vcpu is reset.
>  	 */
> -	vtimer->irq.irq = irq->irq;
> +	vtimer->irq.irq = virt_irq->irq;
> +	ptimer->irq.irq = phys_irq->irq;
>  
>  	/*
>  	 * The bits in CNTV_CTL are architecturally reset to UNKNOWN for ARMv8
> @@ -348,6 +351,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  	 * the ARMv7 architecture.
>  	 */
>  	vtimer->cnt_ctl = 0;
> +	ptimer->cnt_ctl = 0;
>  	kvm_timer_update_state(vcpu);
>  
>  	return 0;
> @@ -369,6 +373,7 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>  
>  	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
> +	vcpu_ptimer(vcpu)->cntvoff = 0;

This is quite contentious, IMHO. Do we really want to expose the delta
between the virtual and physical counters? That's a clear indication to
the guest that it is virtualized. I'm not sure it matters, but I think
we should at least make a conscious choice, and maybe give the
opportunity to userspace to select the desired behaviour.

>  
>  	INIT_WORK(&timer->expired, kvm_timer_inject_irq_work);
>  	hrtimer_init(&timer->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-27  1:04   ` Jintack Lim
  (?)
@ 2017-01-29 15:21     ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 15:21 UTC (permalink / raw)
  To: Jintack Lim
  Cc: pbonzini, rkrcmar, christoffer.dall, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> Now that we maintain the EL1 physical timer register states of VMs,
> update the physical timer interrupt level along with the virtual one.
>
> Note that the emulated EL1 physical timer is not mapped to any hardware
> timer, so we call a proper vgic function.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> ---
>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index 0f6e935..3b6bd50 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>  	WARN_ON(ret);
>  }
>  
> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
> +				 struct arch_timer_context *timer)
> +{
> +	int ret;
> +
> +	BUG_ON(!vgic_initialized(vcpu->kvm));

Although I've added my fair share of BUG_ON() in the code base, I've
since reconsidered my position. If we get into a situation where the vgic
is not initialized, maybe it would be better to just WARN_ON and return
early rather than killing the whole box. Thoughts?
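
Something like this, for example (untested):

	if (WARN_ON(!vgic_initialized(vcpu->kvm)))
		return;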

> +
> +	timer->irq.level = new_level;
> +	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
> +				   timer->irq.level);
> +	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, timer->irq.irq,
> +				  timer->irq.level);
> +	WARN_ON(ret);
> +}
> +
>  /*
>   * Check if there was a change in the timer state (should we raise or lower
>   * the line level to the GIC).
> @@ -188,6 +203,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>  {
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>  	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
> +	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
>  
>  	/*
>  	 * If userspace modified the timer registers via SET_ONE_REG before
> @@ -201,6 +217,10 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>  	if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
>  		kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);
>  
> +	/* The emulated EL1 physical timer irq is not mapped to hardware */

Maybe a slightly better comment would be to say that we're using a
purely virtual interrupt, unrelated to the hardware interrupt.
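
e.g. something along the lines of:

	/*
	 * The emulated EL1 physical timer uses a purely virtual
	 * interrupt, unrelated to any hardware timer interrupt.
	 */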

> +	if (kvm_timer_should_fire(vcpu, ptimer) != ptimer->irq.level)
> +		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
> +
>  	return 0;
>  }

Otherwise looks good.

          M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 10/10] KVM: arm/arm64: Emulate the EL1 phys timer register access
  2017-01-27  1:05   ` Jintack Lim
  (?)
@ 2017-01-29 15:44     ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 15:44 UTC (permalink / raw)
  To: Jintack Lim
  Cc: pbonzini, rkrcmar, christoffer.dall, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Fri, Jan 27 2017 at 01:05:00 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> Emulate read and write operations to CNTP_TVAL, CNTP_CVAL and CNTP_CTL.
> Now VMs are able to use the EL1 physical timer.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> ---
>  arch/arm64/kvm/sys_regs.c    | 32 +++++++++++++++++++++++++++++---
>  include/kvm/arm_arch_timer.h |  2 ++
>  virt/kvm/arm/arch_timer.c    |  2 +-
>  3 files changed, 32 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index fd9e747..adf009f 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -824,7 +824,14 @@ static bool access_cntp_tval(struct kvm_vcpu *vcpu,
>  		struct sys_reg_params *p,
>  		const struct sys_reg_desc *r)
>  {
> -	kvm_inject_undefined(vcpu);
> +	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
> +	u64 now = kvm_phys_timer_read();
> +
> +	if (p->is_write)
> +		ptimer->cnt_cval = p->regval + now;
> +	else
> +		p->regval = ptimer->cnt_cval - now;
> +
>  	return true;
>  }
>  
> @@ -832,7 +839,20 @@ static bool access_cntp_ctl(struct kvm_vcpu *vcpu,
>  		struct sys_reg_params *p,
>  		const struct sys_reg_desc *r)
>  {
> -	kvm_inject_undefined(vcpu);
> +	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
> +
> +	if (p->is_write) {
> +		/* ISTATUS bit is read-only */
> +		ptimer->cnt_ctl = p->regval & ~ARCH_TIMER_CTRL_IT_STAT;
> +	} else {
> +		u64 now = kvm_phys_timer_read();
> +
> +		p->regval = ptimer->cnt_ctl;
> +		/* Set ISTATUS bit if it's expired */
> +		if (ptimer->cnt_cval <= now)
> +			p->regval |= ARCH_TIMER_CTRL_IT_STAT;
> +	}

Shouldn't we take the ENABLE bit into account? The ARMv8 ARM version I
have at hand (version h) seems to indicate that we should, but we should
check with the latest and greatest...
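
If so, the read side could become something like this (assuming ISTATUS
should only be reported for an enabled timer; to be confirmed against
the spec):

	} else {
		u64 now = kvm_phys_timer_read();

		p->regval = ptimer->cnt_ctl;
		/* Only report ISTATUS for an enabled timer whose condition is met */
		if ((ptimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE) &&
		    ptimer->cnt_cval <= now)
			p->regval |= ARCH_TIMER_CTRL_IT_STAT;
	}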

> +
>  	return true;
>  }
>  
> @@ -840,7 +860,13 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
>  		struct sys_reg_params *p,
>  		const struct sys_reg_desc *r)
>  {
> -	kvm_inject_undefined(vcpu);
> +	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
> +
> +	if (p->is_write)
> +		ptimer->cnt_cval = p->regval;
> +	else
> +		p->regval = ptimer->cnt_cval;
> +
>  	return true;
>  }
>  
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index a364593..fec99f2 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -74,6 +74,8 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
>  void kvm_timer_schedule(struct kvm_vcpu *vcpu);
>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
>  
> +u64 kvm_phys_timer_read(void);
> +
>  void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu);
>  
>  void kvm_timer_init_vhe(void);
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index b366bb2..9eec063 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -40,7 +40,7 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
>  	vcpu_vtimer(vcpu)->active_cleared_last = false;
>  }
>  
> -static u64 kvm_phys_timer_read(void)
> +u64 kvm_phys_timer_read(void)
>  {
>  	return timecounter->cc->read(timecounter->cc);
>  }

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 00/10] Provide the EL1 physical timer to the VM
  2017-01-27  1:04 ` Jintack Lim
  (?)
@ 2017-01-29 15:55   ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-29 15:55 UTC (permalink / raw)
  To: Jintack Lim
  Cc: pbonzini, rkrcmar, christoffer.dall, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

Hi Jintack,

On Fri, Jan 27 2017 at 01:04:50 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> The ARM architecture defines the EL1 physical timer and the virtual timer,
> and it is reasonable for an OS to expect to be able to access both.
> However, the current KVM implementation does not provide the EL1 physical
> timer to VMs but terminates VMs on access to the timer.
>
> This patch series enables VMs to use the EL1 physical timer through
> trap-and-emulate.  The KVM host emulates each EL1 physical timer register
> access and sets up the background timer accordingly.  When the background
> timer expires, the KVM host injects EL1 physical timer interrupts to the
> VM.  Alternatively, it's also possible to allow VMs to access the EL1
> physical timer without trapping.  However, this requires somehow using the
> EL2 physical timer for the Linux host while running the VM instead of the
> EL1 physical timer.  Right now I just implemented trap-and-emulate because
> this was straightforward to do, and I leave it to future work to determine
> if transferring the EL1 physical timer state to the EL2 timer provides any
> performance benefit.
>
> This feature will be useful for any OS that wishes to access the EL1
> physical timer. Nested virtualization is one of those use cases. A nested
> hypervisor running inside a VM would think it has full access to the
> hardware and naturally tries to use the EL1 physical timer as Linux would
> do. Other nested hypervisors may try to use the EL2 physical timer as Xen
> would do, but supporting the EL2 physical timer to the VM is out of scope
> of this patch series. This patch series will make it easy to add the EL2
> timer support in the future, though.
>
> Note that Linux VMs booting in EL1 will be unaffected by this patch series
> and will continue to use only the virtual timer and this patch series will
> therefore not introduce any performance degredation as a result of
> trap-and-emulate.

Thanks for respinning this series. Overall, this looks quite good, and
the couple of comments I have should be easy to address.

My main concern is that we do expose a timer that doesn't hide
CNTVOFF. I appreciate that that was already the case, since CNTPCT was
always available (and this avoided trapping the counter), but maybe we
should have a way for userspace to ask for a mode where CNTPCT=CNTVCT,
by trapping the physical counter and taking CNTVOFF in all physical
timer calculations.
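
Roughly, the CNTPCT read emulation could then look like the sketch
below (the handler name is made up here, and enabling the trap via
CNTHCTL_EL2 is left out entirely):

static bool access_cntpct(struct kvm_vcpu *vcpu,
			  struct sys_reg_params *p,
			  const struct sys_reg_desc *r)
{
	if (p->is_write) {
		/* The counter itself is read-only */
		kvm_inject_undefined(vcpu);
		return true;
	}

	/* Make CNTPCT read as CNTVCT by folding in the virtual offset */
	p->regval = kvm_phys_timer_read() - vcpu_vtimer(vcpu)->cntvoff;
	return true;
}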

I'm pretty sure you've addressed this one way or another in your nested
virt series, so maybe extracting the relevant patches and adding them on
top of this series could be an option?

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context
  2017-01-29 11:54     ` Marc Zyngier
  (?)
@ 2017-01-30 14:45       ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-01-30 14:45 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Sun, Jan 29, 2017 at 11:54:05AM +0000, Marc Zyngier wrote:
> On Fri, Jan 27 2017 at 01:04:52 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> > Make cntvoff per each timer context. This is helpful to abstract kvm
> > timer functions to work with timer context without considering timer
> > types (e.g. physical timer or virtual timer).
> >
> > This also would pave the way for ever doing adjustments of the cntvoff
> > on a per-CPU basis if that should ever make sense.
> >
> > Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> > ---
> >  arch/arm/include/asm/kvm_host.h   |  6 +++---
> >  arch/arm64/include/asm/kvm_host.h |  4 ++--
> >  include/kvm/arm_arch_timer.h      |  8 +++-----
> >  virt/kvm/arm/arch_timer.c         | 26 ++++++++++++++++++++------
> >  virt/kvm/arm/hyp/timer-sr.c       |  3 +--
> >  5 files changed, 29 insertions(+), 18 deletions(-)
> >
> > diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> > index d5423ab..f5456a9 100644
> > --- a/arch/arm/include/asm/kvm_host.h
> > +++ b/arch/arm/include/asm/kvm_host.h
> > @@ -60,9 +60,6 @@ struct kvm_arch {
> >  	/* The last vcpu id that ran on each physical CPU */
> >  	int __percpu *last_vcpu_ran;
> >  
> > -	/* Timer */
> > -	struct arch_timer_kvm	timer;
> > -
> >  	/*
> >  	 * Anything that is not used directly from assembly code goes
> >  	 * here.
> > @@ -75,6 +72,9 @@ struct kvm_arch {
> >  	/* Stage-2 page table */
> >  	pgd_t *pgd;
> >  
> > +	/* A lock to synchronize cntvoff among all vtimer context of vcpus */
> > +	spinlock_t cntvoff_lock;
> 
> Is there any condition where we need this to be a spinlock? I would have
> thought that a mutex should have been enough, as this should only be
> updated on migration or initialization. Not that it matters much in this
> case, but I wondered if there is something I'm missing.
> 

I would think the critical section is small enough that a spinlock makes
sense, but I don't think we need to add an additional lock.

I think just taking the kvm->lock, which happens to be a mutex, should
be sufficient; it may be a bit slower to take than a spinlock, but this
is not on a critical path, so let's just keep things simple.

Perhaps this is what Marc also meant.
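
Something along these lines, as a minimal sketch (and the helper could
just as well take the struct kvm directly, as Marc suggests further
down):

static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)
{
	struct kvm *kvm = vcpu->kvm;
	struct kvm_vcpu *tmp;
	int i;

	/* kvm->lock serializes the CNTVOFF update across all vcpus */
	mutex_lock(&kvm->lock);
	kvm_for_each_vcpu(i, tmp, kvm)
		vcpu_vtimer(tmp)->cntvoff = cntvoff;
	mutex_unlock(&kvm->lock);
}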

> > +
> >  	/* Interrupt controller */
> >  	struct vgic_dist	vgic;
> >  	int max_vcpus;
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index e505038..23749a8 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -71,8 +71,8 @@ struct kvm_arch {
> >  	/* Interrupt controller */
> >  	struct vgic_dist	vgic;
> >  
> > -	/* Timer */
> > -	struct arch_timer_kvm	timer;
> > +	/* A lock to synchronize cntvoff among all vtimer context of vcpus */
> > +	spinlock_t cntvoff_lock;
> >  };
> >  
> >  #define KVM_NR_MEM_OBJS     40
> > diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> > index daad3c1..1b9c988 100644
> > --- a/include/kvm/arm_arch_timer.h
> > +++ b/include/kvm/arm_arch_timer.h
> > @@ -23,11 +23,6 @@
> >  #include <linux/hrtimer.h>
> >  #include <linux/workqueue.h>
> >  
> > -struct arch_timer_kvm {
> > -	/* Virtual offset */
> > -	u64			cntvoff;
> > -};
> > -
> >  struct arch_timer_context {
> >  	/* Registers: control register, timer value */
> >  	u32				cnt_ctl;
> > @@ -38,6 +33,9 @@ struct arch_timer_context {
> >  
> >  	/* Active IRQ state caching */
> >  	bool				active_cleared_last;
> > +
> > +	/* Virtual offset */
> > +	u64			cntvoff;
> >  };
> >  
> >  struct arch_timer_cpu {
> > diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> > index 6740efa..fa4c042 100644
> > --- a/virt/kvm/arm/arch_timer.c
> > +++ b/virt/kvm/arm/arch_timer.c
> > @@ -101,9 +101,10 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
> >  static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
> >  {
> >  	u64 cval, now;
> > +	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
> >  
> > -	cval = vcpu_vtimer(vcpu)->cnt_cval;
> > -	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
> > +	cval = vtimer->cnt_cval;
> > +	now = kvm_phys_timer_read() - vtimer->cntvoff;
> >  
> >  	if (now < cval) {
> >  		u64 ns;
> > @@ -159,7 +160,7 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
> >  		return false;
> >  
> >  	cval = vtimer->cnt_cval;
> > -	now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
> > +	now = kvm_phys_timer_read() - vtimer->cntvoff;
> >  
> >  	return cval <= now;
> >  }
> > @@ -353,10 +354,23 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >  	return 0;
> >  }
> >  
> > +/* Make the updates of cntvoff for all vtimer contexts atomic */
> > +static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)
> 
> Arguably, this acts on the VM itself and not a single vcpu. Maybe you
> should consider passing the struct kvm pointer to reflect this.
> 
> > +{
> > +	int i;
> > +
> > +	spin_lock(&vcpu->kvm->arch.cntvoff_lock);
> > +	kvm_for_each_vcpu(i, vcpu, vcpu->kvm)
> > +		vcpu_vtimer(vcpu)->cntvoff = cntvoff;
> > +	spin_unlock(&vcpu->kvm->arch.cntvoff_lock);
> > +}
> > +
> >  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
> >  {
> >  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> >  
> > +	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
> 
> Maybe a comment indicating that we recompute CNTVOFF for all vcpus would
> be welcome (this is not a change in semantics, but it was never obvious
> in the existing code).
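
Even something as simple as this would probably do (wording is just a
suggestion):

	/* Recompute CNTVOFF for all vcpus of this VM */
	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());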
> 
> > +
> >  	INIT_WORK(&timer->expired, kvm_timer_inject_irq_work);
> >  	hrtimer_init(&timer->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
> >  	timer->timer.function = kvm_timer_expire;
> > @@ -376,7 +390,7 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
> >  		vtimer->cnt_ctl = value;
> >  		break;
> >  	case KVM_REG_ARM_TIMER_CNT:
> > -		vcpu->kvm->arch.timer.cntvoff = kvm_phys_timer_read() - value;
> > +		update_vtimer_cntvoff(vcpu, kvm_phys_timer_read() - value);
> >  		break;
> >  	case KVM_REG_ARM_TIMER_CVAL:
> >  		vtimer->cnt_cval = value;
> > @@ -397,7 +411,7 @@ u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid)
> >  	case KVM_REG_ARM_TIMER_CTL:
> >  		return vtimer->cnt_ctl;
> >  	case KVM_REG_ARM_TIMER_CNT:
> > -		return kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
> > +		return kvm_phys_timer_read() - vtimer->cntvoff;
> >  	case KVM_REG_ARM_TIMER_CVAL:
> >  		return vtimer->cnt_cval;
> >  	}
> > @@ -511,7 +525,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
> >  
> >  void kvm_timer_init(struct kvm *kvm)
> >  {
> > -	kvm->arch.timer.cntvoff = kvm_phys_timer_read();
> > +	spin_lock_init(&kvm->arch.cntvoff_lock);
> >  }
> >  
> >  /*
> > diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
> > index 0cf0895..4734915 100644
> > --- a/virt/kvm/arm/hyp/timer-sr.c
> > +++ b/virt/kvm/arm/hyp/timer-sr.c
> > @@ -53,7 +53,6 @@ void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
> >  
> >  void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
> >  {
> > -	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> >  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> >  	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
> >  	u64 val;
> > @@ -71,7 +70,7 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
> >  	}
> >  
> >  	if (timer->enabled) {
> > -		write_sysreg(kvm->arch.timer.cntvoff, cntvoff_el2);
> > +		write_sysreg(vtimer->cntvoff, cntvoff_el2);
> >  		write_sysreg_el0(vtimer->cnt_cval, cntv_cval);
> >  		isb();
> >  		write_sysreg_el0(vtimer->cnt_ctl, cntv_ctl);
> 
I agree with the other two comments as well.

Otherwise looks ok.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 03/10] KVM: arm/arm64: Decouple kvm timer functions from virtual timer
  2017-01-27  1:04   ` Jintack Lim
  (?)
@ 2017-01-30 14:49     ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-01-30 14:49 UTC (permalink / raw)
  To: Jintack Lim
  Cc: pbonzini, rkrcmar, marc.zyngier, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Thu, Jan 26, 2017 at 08:04:53PM -0500, Jintack Lim wrote:
> Now that we have a separate structure for timer context, make functions
> general so that they can work with any timer context, not just the
> virtual timer context.  This does not change the virtual timer
> functionality.
> 
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> ---
>  arch/arm/kvm/arm.c           |  2 +-
>  include/kvm/arm_arch_timer.h |  3 ++-
>  virt/kvm/arm/arch_timer.c    | 55 ++++++++++++++++++++++----------------------
>  3 files changed, 30 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 9d74464..9a34a3c 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -301,7 +301,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
>  
>  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
>  {
> -	return kvm_timer_should_fire(vcpu);
> +	return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
>  }
>  
>  void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index 1b9c988..d921d20 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -67,7 +67,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
>  int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
>  
> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu);
> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
> +			   struct arch_timer_context *timer_ctx);
>  void kvm_timer_schedule(struct kvm_vcpu *vcpu);
>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
>  
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index fa4c042..f72005a 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -98,13 +98,13 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>  	kvm_vcpu_kick(vcpu);
>  }
>  
> -static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
> +static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
> +				   struct arch_timer_context *timer_ctx)

Do you need the vcpu parameter here?
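
If not, it could presumably shrink to something like this (untested;
the ns computation is assumed to be carried over unchanged from the
current helper):

static u64 kvm_timer_compute_delta(struct arch_timer_context *timer_ctx)
{
	u64 cval, now;

	cval = timer_ctx->cnt_cval;
	now = kvm_phys_timer_read() - timer_ctx->cntvoff;

	if (now < cval) {
		u64 ns;

		/* Convert the remaining cycles into nanoseconds */
		ns = cyclecounter_cyc2ns(timecounter->cc,
					 cval - now,
					 timecounter->mask,
					 &timecounter->frac);
		return ns;
	}

	return 0;
}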

>  {
>  	u64 cval, now;
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
> -	cval = vtimer->cnt_cval;
> -	now = kvm_phys_timer_read() - vtimer->cntvoff;
> +	cval = timer_ctx->cnt_cval;
> +	now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>  
>  	if (now < cval) {
>  		u64 ns;
> @@ -133,7 +133,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>  	 * PoV (NTP on the host may have forced it to expire
>  	 * early). If we should have slept longer, restart it.
>  	 */
> -	ns = kvm_timer_compute_delta(vcpu);
> +	ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
>  	if (unlikely(ns)) {
>  		hrtimer_forward_now(hrt, ns_to_ktime(ns));
>  		return HRTIMER_RESTART;
> @@ -143,42 +143,40 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>  	return HRTIMER_NORESTART;
>  }
>  
> -static bool kvm_timer_irq_can_fire(struct kvm_vcpu *vcpu)
> +static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
>  {
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
> -
> -	return !(vtimer->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
> -		(vtimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
> +	return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
> +		(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
>  }
>  
> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
> +			   struct arch_timer_context *timer_ctx)

Do you need the vcpu parameter here?
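
If not, it could presumably take only the timer context, reusing the
body from the hunk below (untested sketch):

bool kvm_timer_should_fire(struct arch_timer_context *timer_ctx)
{
	u64 cval, now;

	if (!kvm_timer_irq_can_fire(timer_ctx))
		return false;

	cval = timer_ctx->cnt_cval;
	now = kvm_phys_timer_read() - timer_ctx->cntvoff;

	return cval <= now;
}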

>  {
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  	u64 cval, now;
>  
> -	if (!kvm_timer_irq_can_fire(vcpu))
> +	if (!kvm_timer_irq_can_fire(timer_ctx))
>  		return false;
>  
> -	cval = vtimer->cnt_cval;
> -	now = kvm_phys_timer_read() - vtimer->cntvoff;
> +	cval = timer_ctx->cnt_cval;
> +	now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>  
>  	return cval <= now;
>  }
>  
> -static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
> +static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
> +					struct arch_timer_context *timer_ctx)
>  {
>  	int ret;
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
>  	BUG_ON(!vgic_initialized(vcpu->kvm));
>  
> -	vtimer->active_cleared_last = false;
> -	vtimer->irq.level = new_level;
> -	trace_kvm_timer_update_irq(vcpu->vcpu_id, vtimer->irq.irq,
> -				   vtimer->irq.level);
> +	timer_ctx->active_cleared_last = false;
> +	timer_ctx->irq.level = new_level;
> +	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
> +				   timer_ctx->irq.level);
>  	ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
> -					 vtimer->irq.irq,
> -					 vtimer->irq.level);
> +					 timer_ctx->irq.irq,
> +					 timer_ctx->irq.level);
>  	WARN_ON(ret);
>  }
>  
> @@ -200,8 +198,8 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>  	if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
>  		return -ENODEV;
>  
> -	if (kvm_timer_should_fire(vcpu) != vtimer->irq.level)
> -		kvm_timer_update_irq(vcpu, !vtimer->irq.level);
> +	if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
> +		kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);
>  
>  	return 0;
>  }
> @@ -214,6 +212,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>  void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>  {
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> +	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
>  	BUG_ON(timer_is_armed(timer));
>  
> @@ -222,18 +221,18 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>  	 * already expired, because kvm_vcpu_block will return before putting
>  	 * the thread to sleep.
>  	 */
> -	if (kvm_timer_should_fire(vcpu))
> +	if (kvm_timer_should_fire(vcpu, vtimer))
>  		return;
>  
>  	/*
>  	 * If the timer is not capable of raising interrupts (disabled or
>  	 * masked), then there's no more work for us to do.
>  	 */
> -	if (!kvm_timer_irq_can_fire(vcpu))
> +	if (!kvm_timer_irq_can_fire(vtimer))
>  		return;
>  
>  	/*  The timer has not yet expired, schedule a background timer */
> -	timer_arm(timer, kvm_timer_compute_delta(vcpu));
> +	timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
>  }
>  
>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
> -- 
> 1.9.1
> 
> 

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 03/10] KVM: arm/arm64: Decouple kvm timer functions from virtual timer
@ 2017-01-30 14:49     ` Christoffer Dall
  0 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-01-30 14:49 UTC (permalink / raw)
  To: Jintack Lim
  Cc: kvm, marc.zyngier, catalin.marinas, will.deacon, linux,
	linux-kernel, andre.przywara, pbonzini, kvmarm, linux-arm-kernel

On Thu, Jan 26, 2017 at 08:04:53PM -0500, Jintack Lim wrote:
> Now that we have a separate structure for timer context, make functions
> general so that they can work with any timer context, not just the
> virtual timer context.  This does not change the virtual timer
> functionality.
> 
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> ---
>  arch/arm/kvm/arm.c           |  2 +-
>  include/kvm/arm_arch_timer.h |  3 ++-
>  virt/kvm/arm/arch_timer.c    | 55 ++++++++++++++++++++++----------------------
>  3 files changed, 30 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 9d74464..9a34a3c 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -301,7 +301,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
>  
>  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
>  {
> -	return kvm_timer_should_fire(vcpu);
> +	return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
>  }
>  
>  void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index 1b9c988..d921d20 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -67,7 +67,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>  u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
>  int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
>  
> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu);
> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
> +			   struct arch_timer_context *timer_ctx);
>  void kvm_timer_schedule(struct kvm_vcpu *vcpu);
>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
>  
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index fa4c042..f72005a 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -98,13 +98,13 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>  	kvm_vcpu_kick(vcpu);
>  }
>  
> -static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
> +static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
> +				   struct arch_timer_context *timer_ctx)

do you need the vcpu parameter here?

>  {
>  	u64 cval, now;
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
> -	cval = vtimer->cnt_cval;
> -	now = kvm_phys_timer_read() - vtimer->cntvoff;
> +	cval = timer_ctx->cnt_cval;
> +	now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>  
>  	if (now < cval) {
>  		u64 ns;
> @@ -133,7 +133,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>  	 * PoV (NTP on the host may have forced it to expire
>  	 * early). If we should have slept longer, restart it.
>  	 */
> -	ns = kvm_timer_compute_delta(vcpu);
> +	ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
>  	if (unlikely(ns)) {
>  		hrtimer_forward_now(hrt, ns_to_ktime(ns));
>  		return HRTIMER_RESTART;
> @@ -143,42 +143,40 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>  	return HRTIMER_NORESTART;
>  }
>  
> -static bool kvm_timer_irq_can_fire(struct kvm_vcpu *vcpu)
> +static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
>  {
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
> -
> -	return !(vtimer->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
> -		(vtimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
> +	return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
> +		(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
>  }
>  
> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
> +			   struct arch_timer_context *timer_ctx)

do you need the vcpu parameter here?

>  {
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  	u64 cval, now;
>  
> -	if (!kvm_timer_irq_can_fire(vcpu))
> +	if (!kvm_timer_irq_can_fire(timer_ctx))
>  		return false;
>  
> -	cval = vtimer->cnt_cval;
> -	now = kvm_phys_timer_read() - vtimer->cntvoff;
> +	cval = timer_ctx->cnt_cval;
> +	now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>  
>  	return cval <= now;
>  }
>  
> -static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
> +static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
> +					struct arch_timer_context *timer_ctx)
>  {
>  	int ret;
> -	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
>  	BUG_ON(!vgic_initialized(vcpu->kvm));
>  
> -	vtimer->active_cleared_last = false;
> -	vtimer->irq.level = new_level;
> -	trace_kvm_timer_update_irq(vcpu->vcpu_id, vtimer->irq.irq,
> -				   vtimer->irq.level);
> +	timer_ctx->active_cleared_last = false;
> +	timer_ctx->irq.level = new_level;
> +	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
> +				   timer_ctx->irq.level);
>  	ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
> -					 vtimer->irq.irq,
> -					 vtimer->irq.level);
> +					 timer_ctx->irq.irq,
> +					 timer_ctx->irq.level);
>  	WARN_ON(ret);
>  }
>  
> @@ -200,8 +198,8 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>  	if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
>  		return -ENODEV;
>  
> -	if (kvm_timer_should_fire(vcpu) != vtimer->irq.level)
> -		kvm_timer_update_irq(vcpu, !vtimer->irq.level);
> +	if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
> +		kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);
>  
>  	return 0;
>  }
> @@ -214,6 +212,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>  void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>  {
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> +	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>  
>  	BUG_ON(timer_is_armed(timer));
>  
> @@ -222,18 +221,18 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>  	 * already expired, because kvm_vcpu_block will return before putting
>  	 * the thread to sleep.
>  	 */
> -	if (kvm_timer_should_fire(vcpu))
> +	if (kvm_timer_should_fire(vcpu, vtimer))
>  		return;
>  
>  	/*
>  	 * If the timer is not capable of raising interrupts (disabled or
>  	 * masked), then there's no more work for us to do.
>  	 */
> -	if (!kvm_timer_irq_can_fire(vcpu))
> +	if (!kvm_timer_irq_can_fire(vtimer))
>  		return;
>  
>  	/*  The timer has not yet expired, schedule a background timer */
> -	timer_arm(timer, kvm_timer_compute_delta(vcpu));
> +	timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
>  }
>  
>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
> -- 
> 1.9.1
> 
> 

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context
  2017-01-30 14:45       ` Christoffer Dall
  (?)
@ 2017-01-30 14:51         ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-30 14:51 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On 30/01/17 14:45, Christoffer Dall wrote:
> On Sun, Jan 29, 2017 at 11:54:05AM +0000, Marc Zyngier wrote:
>> On Fri, Jan 27 2017 at 01:04:52 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>> Make cntvoff per each timer context. This is helpful to abstract kvm
>>> timer functions to work with timer context without considering timer
>>> types (e.g. physical timer or virtual timer).
>>>
>>> This also would pave the way for ever doing adjustments of the cntvoff
>>> on a per-CPU basis if that should ever make sense.
>>>
>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>> ---
>>>  arch/arm/include/asm/kvm_host.h   |  6 +++---
>>>  arch/arm64/include/asm/kvm_host.h |  4 ++--
>>>  include/kvm/arm_arch_timer.h      |  8 +++-----
>>>  virt/kvm/arm/arch_timer.c         | 26 ++++++++++++++++++++------
>>>  virt/kvm/arm/hyp/timer-sr.c       |  3 +--
>>>  5 files changed, 29 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>> index d5423ab..f5456a9 100644
>>> --- a/arch/arm/include/asm/kvm_host.h
>>> +++ b/arch/arm/include/asm/kvm_host.h
>>> @@ -60,9 +60,6 @@ struct kvm_arch {
>>>  	/* The last vcpu id that ran on each physical CPU */
>>>  	int __percpu *last_vcpu_ran;
>>>  
>>> -	/* Timer */
>>> -	struct arch_timer_kvm	timer;
>>> -
>>>  	/*
>>>  	 * Anything that is not used directly from assembly code goes
>>>  	 * here.
>>> @@ -75,6 +72,9 @@ struct kvm_arch {
>>>  	/* Stage-2 page table */
>>>  	pgd_t *pgd;
>>>  
>>> +	/* A lock to synchronize cntvoff among all vtimer context of vcpus */
>>> +	spinlock_t cntvoff_lock;
>>
>> Is there any condition where we need this to be a spinlock? I would have
>> thought that a mutex should have been enough, as this should only be
>> updated on migration or initialization. Not that it matters much in this
>> case, but I wondered if there is something I'm missing.
>>
> 
> I would think the critical section is small enough that a spinlock makes
> sense, but what I don't think we need is to add the additional lock.
> 
> I think just taking the kvm->lock should be sufficient, which happens to
> be a mutex, and while that may be a bit slower to take than the
> spinlock, it's not in the critical path so let's just keep things
> simple.
> 
> Perhaps this is what Marc also meant.

That would be the logical conclusion, assuming that we can sleep on this
path.
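
(As a rough sketch of that, assuming the next version keeps a helper
along the lines of the series' update_vtimer_cntvoff() that writes the
offset into every vcpu's vtimer context, taking kvm->lock around the
walk would look something like:

static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)
{
	struct kvm *kvm = vcpu->kvm;
	struct kvm_vcpu *tmp;
	int i;

	mutex_lock(&kvm->lock);
	kvm_for_each_vcpu(i, tmp, kvm)
		vcpu_vtimer(tmp)->cntvoff = cntvoff;
	mutex_unlock(&kvm->lock);
}

and the extra cntvoff_lock in struct kvm_arch could then be dropped.)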

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 05/10] KVM: arm/arm64: Initialize the emulated EL1 physical timer
  2017-01-29 12:07     ` Marc Zyngier
  (?)
@ 2017-01-30 14:58       ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-01-30 14:58 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel, Peter Maydell

On Sun, Jan 29, 2017 at 12:07:48PM +0000, Marc Zyngier wrote:
> On Fri, Jan 27 2017 at 01:04:55 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> > Initialize the emulated EL1 physical timer with the default irq number.
> >
> > Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> > ---
> >  arch/arm/kvm/reset.c         | 9 ++++++++-
> >  arch/arm64/kvm/reset.c       | 9 ++++++++-
> >  include/kvm/arm_arch_timer.h | 3 ++-
> >  virt/kvm/arm/arch_timer.c    | 9 +++++++--
> >  4 files changed, 25 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm/kvm/reset.c b/arch/arm/kvm/reset.c
> > index 4b5e802..1da8b2d 100644
> > --- a/arch/arm/kvm/reset.c
> > +++ b/arch/arm/kvm/reset.c
> > @@ -37,6 +37,11 @@
> >  	.usr_regs.ARM_cpsr = SVC_MODE | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT,
> >  };
> >  
> > +static const struct kvm_irq_level cortexa_ptimer_irq = {
> > +	{ .irq = 30 },
> > +	.level = 1,
> > +};
> 
> At some point, we'll have to make that discoverable/configurable. Maybe
> the time for a discoverable arch timer has come (see below).
> 
> > +
> >  static const struct kvm_irq_level cortexa_vtimer_irq = {
> >  	{ .irq = 27 },
> >  	.level = 1,
> > @@ -58,6 +63,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >  {
> >  	struct kvm_regs *reset_regs;
> >  	const struct kvm_irq_level *cpu_vtimer_irq;
> > +	const struct kvm_irq_level *cpu_ptimer_irq;
> >  
> >  	switch (vcpu->arch.target) {
> >  	case KVM_ARM_TARGET_CORTEX_A7:
> > @@ -65,6 +71,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >  		reset_regs = &cortexa_regs_reset;
> >  		vcpu->arch.midr = read_cpuid_id();
> >  		cpu_vtimer_irq = &cortexa_vtimer_irq;
> > +		cpu_ptimer_irq = &cortexa_ptimer_irq;
> >  		break;
> >  	default:
> >  		return -ENODEV;
> > @@ -77,5 +84,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >  	kvm_reset_coprocs(vcpu);
> >  
> >  	/* Reset arch_timer context */
> > -	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
> > +	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
> >  }
> > diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> > index e95d4f6..d9e9697 100644
> > --- a/arch/arm64/kvm/reset.c
> > +++ b/arch/arm64/kvm/reset.c
> > @@ -46,6 +46,11 @@
> >  			COMPAT_PSR_I_BIT | COMPAT_PSR_F_BIT),
> >  };
> >  
> > +static const struct kvm_irq_level default_ptimer_irq = {
> > +	.irq	= 30,
> > +	.level	= 1,
> > +};
> > +
> >  static const struct kvm_irq_level default_vtimer_irq = {
> >  	.irq	= 27,
> >  	.level	= 1,
> > @@ -104,6 +109,7 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
> >  int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >  {
> >  	const struct kvm_irq_level *cpu_vtimer_irq;
> > +	const struct kvm_irq_level *cpu_ptimer_irq;
> >  	const struct kvm_regs *cpu_reset;
> >  
> >  	switch (vcpu->arch.target) {
> > @@ -117,6 +123,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >  		}
> >  
> >  		cpu_vtimer_irq = &default_vtimer_irq;
> > +		cpu_ptimer_irq = &default_ptimer_irq;
> >  		break;
> >  	}
> >  
> > @@ -130,5 +137,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >  	kvm_pmu_vcpu_reset(vcpu);
> >  
> >  	/* Reset timer */
> > -	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
> > +	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
> >  }
> > diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> > index 69f648b..a364593 100644
> > --- a/include/kvm/arm_arch_timer.h
> > +++ b/include/kvm/arm_arch_timer.h
> > @@ -59,7 +59,8 @@ struct arch_timer_cpu {
> >  int kvm_timer_enable(struct kvm_vcpu *vcpu);
> >  void kvm_timer_init(struct kvm *kvm);
> >  int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> > -			 const struct kvm_irq_level *irq);
> > +			 const struct kvm_irq_level *virt_irq,
> > +			 const struct kvm_irq_level *phys_irq);
> >  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
> >  void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
> >  void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
> > diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> > index f72005a..0f6e935 100644
> > --- a/virt/kvm/arm/arch_timer.c
> > +++ b/virt/kvm/arm/arch_timer.c
> > @@ -329,9 +329,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
> >  }
> >  
> >  int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> > -			 const struct kvm_irq_level *irq)
> > +			 const struct kvm_irq_level *virt_irq,
> > +			 const struct kvm_irq_level *phys_irq)
> >  {
> >  	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
> > +	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
> >  
> >  	/*
> >  	 * The vcpu timer irq number cannot be determined in
> > @@ -339,7 +341,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >  	 * kvm_vcpu_set_target(). To handle this, we determine
> >  	 * vcpu timer irq number when the vcpu is reset.
> >  	 */
> > -	vtimer->irq.irq = irq->irq;
> > +	vtimer->irq.irq = virt_irq->irq;
> > +	ptimer->irq.irq = phys_irq->irq;
> >  
> >  	/*
> >  	 * The bits in CNTV_CTL are architecturally reset to UNKNOWN for ARMv8
> > @@ -348,6 +351,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >  	 * the ARMv7 architecture.
> >  	 */
> >  	vtimer->cnt_ctl = 0;
> > +	ptimer->cnt_ctl = 0;
> >  	kvm_timer_update_state(vcpu);
> >  
> >  	return 0;
> > @@ -369,6 +373,7 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
> >  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> >  
> >  	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
> > +	vcpu_ptimer(vcpu)->cntvoff = 0;
> 
> This is quite contentious, IMHO. Do we really want to expose the delta
> between the virtual and physical counters? That's a clear indication to
> the guest that it is virtualized. I'm not sure it matters, but I think
> we should at least make a conscious choice, and maybe give the
> opportunity to userspace to select the desired behaviour.
> 

So my understanding of the architecture is that you should always have
two timer/counter pairs available at EL1.  They may be synchronized, and
they may not.  If you want an accurate reading of wall clock time, look
at the physical counter, and that can generally be expected to be fast,
precise, and syncrhonized (on working hardware, of course).

Now, there can be a constant or potentially monotonically increasing
offset between the physial and virtual counters, which may mean you're
running under a hypervisor or (in the first case) that firmware
programmed or neglected to program cntvoff.  I don't think it's an
inherent problem to expose that difference to a guest, and I think it's
more important that reading the physical counter is fast and doesn't
trap.

The question is which contract we can have with a guest OS, and which
legacy we have to keep supporting (Linux, UEFI, ?).

Probably Linux should keep relying on the virtual counter/timer only,
unless something is advertised in DT/ACPI, about it being able to use
the physical timer/counter pair, even when booted at EL1.  We could
explore the opportunities to build on that to let the guest figure
out stolen time by reading the two counters and by programming the
proper timer depending on the desired semantics (i.e. virtual or
physical time).
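
Purely as a guest-side sketch of that idea (not something this series
implements): with both counters readable at EL1, a guest could observe
the offset the hypervisor programmed, e.g.

static inline u64 read_cntpct(void)
{
	u64 cval;

	asm volatile("mrs %0, cntpct_el0" : "=r" (cval));
	return cval;
}

static inline u64 read_cntvct(void)
{
	u64 cval;

	asm volatile("mrs %0, cntvct_el0" : "=r" (cval));
	return cval;
}

/* CNTVOFF as seen from the guest: physical minus virtual counter */
static inline u64 observed_cntvoff(void)
{
	return read_cntpct() - read_cntvct();
}

Ordering barriers are omitted for brevity, but a stolen-time scheme
could build on something like this.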

In terms of this patch, I actually think it's fine, but we may need to
build something more on top later.  It is possible, though, that I'm
entirely missing the point about Linux timekeeping infrastructure and
that my reading of the architecture is bogus.

What do you think?

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-29 15:21     ` Marc Zyngier
@ 2017-01-30 15:02       ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-01-30 15:02 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> > Now that we maintain the EL1 physical timer register states of VMs,
> > update the physical timer interrupt level along with the virtual one.
> >
> > Note that the emulated EL1 physical timer is not mapped to any hardware
> > timer, so we call a proper vgic function.
> >
> > Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> > ---
> >  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> >
> > diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> > index 0f6e935..3b6bd50 100644
> > --- a/virt/kvm/arm/arch_timer.c
> > +++ b/virt/kvm/arm/arch_timer.c
> > @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
> >  	WARN_ON(ret);
> >  }
> >  
> > +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
> > +				 struct arch_timer_context *timer)
> > +{
> > +	int ret;
> > +
> > +	BUG_ON(!vgic_initialized(vcpu->kvm));
> 
> Although I've added my fair share of BUG_ON() in the code base, I've
> since reconsidered my position. If we get in a situation where the vgic
> is not initialized, maybe it would be better to just WARN_ON and return
> early rather than killing the whole box. Thoughts?
> 

The distinction to me is whether this will cause fatal crashes or
exploits down the road if we're working on uninitialized data.  If all
that can happen when the vgic is not initialized is that the guest
doesn't see interrupts, for example, then a WARN_ON is appropriate.
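
(i.e. if we keep a check at all, the early-return form would be
something like

	if (WARN_ON(!vgic_initialized(vcpu->kvm)))
		return;

rather than a BUG_ON; just a sketch.)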

Which is the case here?

That being said, do we need this at all?  This is in the critical path
and is actually measurable (I know this from my work on the other timer
series), so it's better to get rid of it if we can.  Can we simply
convince ourselves this will never happen, and is the code ever likely
to change so that it gets called with the vgic disabled later?


Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 10/10] KVM: arm/arm64: Emulate the EL1 phys timer register access
  2017-01-29 15:44     ` Marc Zyngier
  (?)
@ 2017-01-30 17:08       ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-30 17:08 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Paolo Bonzini, Radim Krčmář,
	Christoffer Dall, linux, Catalin Marinas, will.deacon,
	andre.przywara, KVM General, linux-arm-kernel, kvmarm,
	linux-kernel

Hi Marc,

On Sun, Jan 29, 2017 at 10:44 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> On Fri, Jan 27 2017 at 01:05:00 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>> Emulate read and write operations to CNTP_TVAL, CNTP_CVAL and CNTP_CTL.
>> Now VMs are able to use the EL1 physical timer.
>>
>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>> ---
>>  arch/arm64/kvm/sys_regs.c    | 32 +++++++++++++++++++++++++++++---
>>  include/kvm/arm_arch_timer.h |  2 ++
>>  virt/kvm/arm/arch_timer.c    |  2 +-
>>  3 files changed, 32 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index fd9e747..adf009f 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -824,7 +824,14 @@ static bool access_cntp_tval(struct kvm_vcpu *vcpu,
>>               struct sys_reg_params *p,
>>               const struct sys_reg_desc *r)
>>  {
>> -     kvm_inject_undefined(vcpu);
>> +     struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
>> +     u64 now = kvm_phys_timer_read();
>> +
>> +     if (p->is_write)
>> +             ptimer->cnt_cval = p->regval + now;
>> +     else
>> +             p->regval = ptimer->cnt_cval - now;
>> +
>>       return true;
>>  }
>>
>> @@ -832,7 +839,20 @@ static bool access_cntp_ctl(struct kvm_vcpu *vcpu,
>>               struct sys_reg_params *p,
>>               const struct sys_reg_desc *r)
>>  {
>> -     kvm_inject_undefined(vcpu);
>> +     struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
>> +
>> +     if (p->is_write) {
>> +             /* ISTATUS bit is read-only */
>> +             ptimer->cnt_ctl = p->regval & ~ARCH_TIMER_CTRL_IT_STAT;
>> +     } else {
>> +             u64 now = kvm_phys_timer_read();
>> +
>> +             p->regval = ptimer->cnt_ctl;
>> +             /* Set ISTATUS bit if it's expired */
>> +             if (ptimer->cnt_cval <= now)
>> +                     p->regval |= ARCH_TIMER_CTRL_IT_STAT;
>> +     }
>
> Shouldn't we take the ENABLE bit into account? The ARMv8 ARM version I
> have at hand (version h) seems to indicate that we should, but we should
> check with the latest and greatest...

Thanks! I was not clear about this. I have ARM ARM version k, and it
says that 'When the value of the ENABLE bit is 0, the ISTATUS field is
UNKNOWN.' So I thought the istatus value doesn't matter if ENABLE is
0, and just set istatus bit regardless of ENABLE bit. If this is not
what the manual meant, then I'm happy to fix this.

Thanks,
Jintack

>
>> +
>>       return true;
>>  }
>>
>> @@ -840,7 +860,13 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
>>               struct sys_reg_params *p,
>>               const struct sys_reg_desc *r)
>>  {
>> -     kvm_inject_undefined(vcpu);
>> +     struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
>> +
>> +     if (p->is_write)
>> +             ptimer->cnt_cval = p->regval;
>> +     else
>> +             p->regval = ptimer->cnt_cval;
>> +
>>       return true;
>>  }
>>
>> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
>> index a364593..fec99f2 100644
>> --- a/include/kvm/arm_arch_timer.h
>> +++ b/include/kvm/arm_arch_timer.h
>> @@ -74,6 +74,8 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
>>  void kvm_timer_schedule(struct kvm_vcpu *vcpu);
>>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
>>
>> +u64 kvm_phys_timer_read(void);
>> +
>>  void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu);
>>
>>  void kvm_timer_init_vhe(void);
>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>> index b366bb2..9eec063 100644
>> --- a/virt/kvm/arm/arch_timer.c
>> +++ b/virt/kvm/arm/arch_timer.c
>> @@ -40,7 +40,7 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
>>       vcpu_vtimer(vcpu)->active_cleared_last = false;
>>  }
>>
>> -static u64 kvm_phys_timer_read(void)
>> +u64 kvm_phys_timer_read(void)
>>  {
>>       return timecounter->cc->read(timecounter->cc);
>>  }
>
> Thanks,
>
>         M.
> --
> Jazz is not dead. It just smells funny.
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 03/10] KVM: arm/arm64: Decouple kvm timer functions from virtual timer
  2017-01-29 12:01     ` Marc Zyngier
@ 2017-01-30 17:17       ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-30 17:17 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Paolo Bonzini, Radim Krčmář,
	Christoffer Dall, linux, Catalin Marinas, will.deacon,
	andre.przywara, KVM General, linux-arm-kernel, kvmarm,
	linux-kernel

On Sun, Jan 29, 2017 at 7:01 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> On Fri, Jan 27 2017 at 01:04:53 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>> Now that we have a separate structure for timer context, make functions
>> general so that they can work with any timer context, not just the
>
>   generic?

yes, thanks!

>
>> virtual timer context.  This does not change the virtual timer
>> functionality.
>>
>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>> ---
>>  arch/arm/kvm/arm.c           |  2 +-
>>  include/kvm/arm_arch_timer.h |  3 ++-
>>  virt/kvm/arm/arch_timer.c    | 55 ++++++++++++++++++++++----------------------
>>  3 files changed, 30 insertions(+), 30 deletions(-)
>>
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index 9d74464..9a34a3c 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -301,7 +301,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
>>
>>  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
>>  {
>> -     return kvm_timer_should_fire(vcpu);
>> +     return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
>>  }
>>
>>  void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
>> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
>> index 1b9c988..d921d20 100644
>> --- a/include/kvm/arm_arch_timer.h
>> +++ b/include/kvm/arm_arch_timer.h
>> @@ -67,7 +67,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>  u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
>>  int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
>>
>> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu);
>> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
>> +                        struct arch_timer_context *timer_ctx);
>>  void kvm_timer_schedule(struct kvm_vcpu *vcpu);
>>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
>>
>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>> index fa4c042..f72005a 100644
>> --- a/virt/kvm/arm/arch_timer.c
>> +++ b/virt/kvm/arm/arch_timer.c
>> @@ -98,13 +98,13 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>>       kvm_vcpu_kick(vcpu);
>>  }
>>
>> -static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
>> +static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
>> +                                struct arch_timer_context *timer_ctx)
>>  {
>>       u64 cval, now;
>> -     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>
>> -     cval = vtimer->cnt_cval;
>> -     now = kvm_phys_timer_read() - vtimer->cntvoff;
>> +     cval = timer_ctx->cnt_cval;
>> +     now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>>
>>       if (now < cval) {
>>               u64 ns;
>> @@ -133,7 +133,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>>        * PoV (NTP on the host may have forced it to expire
>>        * early). If we should have slept longer, restart it.
>>        */
>> -     ns = kvm_timer_compute_delta(vcpu);
>> +     ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
>>       if (unlikely(ns)) {
>>               hrtimer_forward_now(hrt, ns_to_ktime(ns));
>>               return HRTIMER_RESTART;
>> @@ -143,42 +143,40 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>>       return HRTIMER_NORESTART;
>>  }
>>
>> -static bool kvm_timer_irq_can_fire(struct kvm_vcpu *vcpu)
>> +static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
>>  {
>> -     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>> -
>> -     return !(vtimer->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
>> -             (vtimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
>> +     return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
>> +             (timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
>>  }
>>
>> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
>> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
>> +                        struct arch_timer_context *timer_ctx)
>>  {
>> -     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>       u64 cval, now;
>>
>> -     if (!kvm_timer_irq_can_fire(vcpu))
>> +     if (!kvm_timer_irq_can_fire(timer_ctx))
>>               return false;
>>
>> -     cval = vtimer->cnt_cval;
>> -     now = kvm_phys_timer_read() - vtimer->cntvoff;
>> +     cval = timer_ctx->cnt_cval;
>> +     now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>>
>>       return cval <= now;
>>  }
>>
>> -static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
>> +static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>> +                                     struct arch_timer_context *timer_ctx)
>>  {
>>       int ret;
>> -     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>
>>       BUG_ON(!vgic_initialized(vcpu->kvm));
>>
>> -     vtimer->active_cleared_last = false;
>> -     vtimer->irq.level = new_level;
>> -     trace_kvm_timer_update_irq(vcpu->vcpu_id, vtimer->irq.irq,
>> -                                vtimer->irq.level);
>> +     timer_ctx->active_cleared_last = false;
>> +     timer_ctx->irq.level = new_level;
>> +     trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
>> +                                timer_ctx->irq.level);
>>       ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
>> -                                      vtimer->irq.irq,
>> -                                      vtimer->irq.level);
>> +                                      timer_ctx->irq.irq,
>> +                                      timer_ctx->irq.level);
>>       WARN_ON(ret);
>>  }
>>
>> @@ -200,8 +198,8 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>>       if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
>>               return -ENODEV;
>>
>> -     if (kvm_timer_should_fire(vcpu) != vtimer->irq.level)
>> -             kvm_timer_update_irq(vcpu, !vtimer->irq.level);
>> +     if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
>> +             kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);
>>
>>       return 0;
>>  }
>> @@ -214,6 +212,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>>  void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>>  {
>>       struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>> +     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>
>>       BUG_ON(timer_is_armed(timer));
>>
>> @@ -222,18 +221,18 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>>        * already expired, because kvm_vcpu_block will return before putting
>>        * the thread to sleep.
>>        */
>> -     if (kvm_timer_should_fire(vcpu))
>> +     if (kvm_timer_should_fire(vcpu, vtimer))
>>               return;
>>
>>       /*
>>        * If the timer is not capable of raising interrupts (disabled or
>>        * masked), then there's no more work for us to do.
>>        */
>> -     if (!kvm_timer_irq_can_fire(vcpu))
>> +     if (!kvm_timer_irq_can_fire(vtimer))
>>               return;
>>
>>       /*  The timer has not yet expired, schedule a background timer */
>> -     timer_arm(timer, kvm_timer_compute_delta(vcpu));
>> +     timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
>>  }
>>
>>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
>
> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
>
>           M.
> --
> Jazz is not dead. It just smells funny.
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 03/10] KVM: arm/arm64: Decouple kvm timer functions from virtual timer
  2017-01-30 14:49     ` Christoffer Dall
@ 2017-01-30 17:18       ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-30 17:18 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Paolo Bonzini, Radim Krčmář,
	Marc Zyngier, linux, Catalin Marinas, will.deacon,
	andre.przywara, KVM General, linux-arm-kernel, kvmarm,
	linux-kernel

Hi Christoffer,

On Mon, Jan 30, 2017 at 9:49 AM, Christoffer Dall
<christoffer.dall@linaro.org> wrote:
> On Thu, Jan 26, 2017 at 08:04:53PM -0500, Jintack Lim wrote:
>> Now that we have a separate structure for timer context, make functions
>> general so that they can work with any timer context, not just the
>> virtual timer context.  This does not change the virtual timer
>> functionality.
>>
>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>> ---
>>  arch/arm/kvm/arm.c           |  2 +-
>>  include/kvm/arm_arch_timer.h |  3 ++-
>>  virt/kvm/arm/arch_timer.c    | 55 ++++++++++++++++++++++----------------------
>>  3 files changed, 30 insertions(+), 30 deletions(-)
>>
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index 9d74464..9a34a3c 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -301,7 +301,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
>>
>>  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
>>  {
>> -     return kvm_timer_should_fire(vcpu);
>> +     return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
>>  }
>>
>>  void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
>> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
>> index 1b9c988..d921d20 100644
>> --- a/include/kvm/arm_arch_timer.h
>> +++ b/include/kvm/arm_arch_timer.h
>> @@ -67,7 +67,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>  u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
>>  int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
>>
>> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu);
>> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
>> +                        struct arch_timer_context *timer_ctx);
>>  void kvm_timer_schedule(struct kvm_vcpu *vcpu);
>>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
>>
>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>> index fa4c042..f72005a 100644
>> --- a/virt/kvm/arm/arch_timer.c
>> +++ b/virt/kvm/arm/arch_timer.c
>> @@ -98,13 +98,13 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>>       kvm_vcpu_kick(vcpu);
>>  }
>>
>> -static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
>> +static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
>> +                                struct arch_timer_context *timer_ctx)
>
> do you need the vcpu parameter here?

No, I'll remove it.

>
>>  {
>>       u64 cval, now;
>> -     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>
>> -     cval = vtimer->cnt_cval;
>> -     now = kvm_phys_timer_read() - vtimer->cntvoff;
>> +     cval = timer_ctx->cnt_cval;
>> +     now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>>
>>       if (now < cval) {
>>               u64 ns;
>> @@ -133,7 +133,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>>        * PoV (NTP on the host may have forced it to expire
>>        * early). If we should have slept longer, restart it.
>>        */
>> -     ns = kvm_timer_compute_delta(vcpu);
>> +     ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
>>       if (unlikely(ns)) {
>>               hrtimer_forward_now(hrt, ns_to_ktime(ns));
>>               return HRTIMER_RESTART;
>> @@ -143,42 +143,40 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
>>       return HRTIMER_NORESTART;
>>  }
>>
>> -static bool kvm_timer_irq_can_fire(struct kvm_vcpu *vcpu)
>> +static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
>>  {
>> -     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>> -
>> -     return !(vtimer->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
>> -             (vtimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
>> +     return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
>> +             (timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
>>  }
>>
>> -bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
>> +bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
>> +                        struct arch_timer_context *timer_ctx)
>
> do you need the vcpu parameter here?

No, I'll remove it.

Thanks,
Jintack
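
For what it's worth, with the vcpu parameter dropped the helper would
look roughly like this (a sketch based on the code quoted below, not
the actual v3 change); kvm_timer_compute_delta() would change the same
way, and callers would then pass just the context:

        bool kvm_timer_should_fire(struct arch_timer_context *timer_ctx)
        {
                u64 cval, now;

                /* Masked or disabled timers can never fire */
                if (!kvm_timer_irq_can_fire(timer_ctx))
                        return false;

                cval = timer_ctx->cnt_cval;
                now = kvm_phys_timer_read() - timer_ctx->cntvoff;

                return cval <= now;
        }

        /* e.g. in kvm_cpu_has_pending_timer():
         *      return kvm_timer_should_fire(vcpu_vtimer(vcpu));
         */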

>
>>  {
>> -     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>       u64 cval, now;
>>
>> -     if (!kvm_timer_irq_can_fire(vcpu))
>> +     if (!kvm_timer_irq_can_fire(timer_ctx))
>>               return false;
>>
>> -     cval = vtimer->cnt_cval;
>> -     now = kvm_phys_timer_read() - vtimer->cntvoff;
>> +     cval = timer_ctx->cnt_cval;
>> +     now = kvm_phys_timer_read() - timer_ctx->cntvoff;
>>
>>       return cval <= now;
>>  }
>>
>> -static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
>> +static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>> +                                     struct arch_timer_context *timer_ctx)
>>  {
>>       int ret;
>> -     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>
>>       BUG_ON(!vgic_initialized(vcpu->kvm));
>>
>> -     vtimer->active_cleared_last = false;
>> -     vtimer->irq.level = new_level;
>> -     trace_kvm_timer_update_irq(vcpu->vcpu_id, vtimer->irq.irq,
>> -                                vtimer->irq.level);
>> +     timer_ctx->active_cleared_last = false;
>> +     timer_ctx->irq.level = new_level;
>> +     trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
>> +                                timer_ctx->irq.level);
>>       ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
>> -                                      vtimer->irq.irq,
>> -                                      vtimer->irq.level);
>> +                                      timer_ctx->irq.irq,
>> +                                      timer_ctx->irq.level);
>>       WARN_ON(ret);
>>  }
>>
>> @@ -200,8 +198,8 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>>       if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
>>               return -ENODEV;
>>
>> -     if (kvm_timer_should_fire(vcpu) != vtimer->irq.level)
>> -             kvm_timer_update_irq(vcpu, !vtimer->irq.level);
>> +     if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
>> +             kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);
>>
>>       return 0;
>>  }
>> @@ -214,6 +212,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
>>  void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>>  {
>>       struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>> +     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>
>>       BUG_ON(timer_is_armed(timer));
>>
>> @@ -222,18 +221,18 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
>>        * already expired, because kvm_vcpu_block will return before putting
>>        * the thread to sleep.
>>        */
>> -     if (kvm_timer_should_fire(vcpu))
>> +     if (kvm_timer_should_fire(vcpu, vtimer))
>>               return;
>>
>>       /*
>>        * If the timer is not capable of raising interrupts (disabled or
>>        * masked), then there's no more work for us to do.
>>        */
>> -     if (!kvm_timer_irq_can_fire(vcpu))
>> +     if (!kvm_timer_irq_can_fire(vtimer))
>>               return;
>>
>>       /*  The timer has not yet expired, schedule a background timer */
>> -     timer_arm(timer, kvm_timer_compute_delta(vcpu));
>> +     timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
>>  }
>>
>>  void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
>> --
>> 1.9.1
>>
>>
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 10/10] KVM: arm/arm64: Emulate the EL1 phys timer register access
  2017-01-30 17:08       ` Jintack Lim
@ 2017-01-30 17:26         ` Peter Maydell
  -1 siblings, 0 replies; 127+ messages in thread
From: Peter Maydell @ 2017-01-30 17:26 UTC (permalink / raw)
  To: Jintack Lim
  Cc: Marc Zyngier, KVM General, Catalin Marinas, Will Deacon, linux,
	lkml - Kernel Mailing List, arm-mail-list, Andre Przywara,
	Paolo Bonzini, kvmarm

On 30 January 2017 at 17:08, Jintack Lim <jintack@cs.columbia.edu> wrote:
> On Sun, Jan 29, 2017 at 10:44 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
>> Shouldn't we take the ENABLE bit into account? The ARMv8 ARM version I
>> have at hand (version h) seems to indicate that we should, but we should
>> check with the latest and greatest...
>
> Thanks! I wasn't sure about this. I have ARM ARM version k, and it
> says that 'When the value of the ENABLE bit is 0, the ISTATUS field is
> UNKNOWN.' So I thought the ISTATUS value doesn't matter if ENABLE is
> 0, and just set the ISTATUS bit regardless of the ENABLE bit. If this
> is not what the manual meant, then I'm happy to fix it.

It looks like the spec has been relaxed between the doc version
that Marc was looking at and the current one. So it's OK for
an implementation to either (a) set ISTATUS to 0 if ENABLE
is 0, or (b) do what you've done and set ISTATUS according
to the timer comparison whether ENABLE is clear or not
(or even (c) set ISTATUS to a random value if ENABLE is clear,
and other less likely choices).
I think we should add a comment to note that it's architecturally
UNKNOWN and we've made a choice for our implementation convenience.

thanks
-- PMM

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 10/10] KVM: arm/arm64: Emulate the EL1 phys timer register access
  2017-01-30 17:26         ` Peter Maydell
@ 2017-01-30 17:35           ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-30 17:35 UTC (permalink / raw)
  To: Peter Maydell, Jintack Lim
  Cc: KVM General, Catalin Marinas, Will Deacon, linux,
	lkml - Kernel Mailing List, arm-mail-list, Andre Przywara,
	Paolo Bonzini, kvmarm

On 30/01/17 17:26, Peter Maydell wrote:
> On 30 January 2017 at 17:08, Jintack Lim <jintack@cs.columbia.edu> wrote:
>> On Sun, Jan 29, 2017 at 10:44 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
>>> Shouldn't we take the ENABLE bit into account? The ARMv8 ARM version I
>>> have at hand (version h) seems to indicate that we should, but we should
>>> check with the latest and greatest...
>>
>> Thanks! I wasn't sure about this. I have ARM ARM version k, and it
>> says that 'When the value of the ENABLE bit is 0, the ISTATUS field is
>> UNKNOWN.' So I thought the ISTATUS value doesn't matter if ENABLE is
>> 0, and just set the ISTATUS bit regardless of the ENABLE bit. If this
>> is not what the manual meant, then I'm happy to fix it.
> 
> It looks like the spec has been relaxed between the doc version
> that Marc was looking at and the current one. So it's OK for
> an implementation to either (a) set ISTATUS to 0 if ENABLE
> is 0, or (b) do what you've done and set ISTATUS according
> to the timer comparison whether ENABLE is clear or not
> (or even (c) set ISTATUS to a random value if ENABLE is clear,
> and other less likely choices).
> I think we should add a comment to note that it's architecturally
> UNKNOWN and we've made a choice for our implementation convenience.

In that case, the proposed implementation is perfectly fine. I'll retire
the old ARMv8 ARM from my laptop (funnily enough, I didn't fancy
downloading version k while on the train and having my phone as my link
to the outside world... ;-).

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 10/10] KVM: arm/arm64: Emulate the EL1 phys timer register access
  2017-01-30 17:26         ` Peter Maydell
@ 2017-01-30 17:38           ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-30 17:38 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Marc Zyngier, KVM General, Catalin Marinas, Will Deacon, linux,
	lkml - Kernel Mailing List, arm-mail-list, Andre Przywara,
	Paolo Bonzini, kvmarm

Hi Peter,

On Mon, Jan 30, 2017 at 12:26 PM, Peter Maydell
<peter.maydell@linaro.org> wrote:
> On 30 January 2017 at 17:08, Jintack Lim <jintack@cs.columbia.edu> wrote:
>> On Sun, Jan 29, 2017 at 10:44 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
>>> Shouldn't we take the ENABLE bit into account? The ARMv8 ARM version I
>>> have at hand (version h) seems to indicate that we should, but we should
>>> check with the latest and greatest...
>>
>> Thanks! I wasn't sure about this. I have ARM ARM version k, and it
>> says that 'When the value of the ENABLE bit is 0, the ISTATUS field is
>> UNKNOWN.' So I thought the ISTATUS value doesn't matter if ENABLE is
>> 0, and just set the ISTATUS bit regardless of the ENABLE bit. If this
>> is not what the manual meant, then I'm happy to fix it.
>
> It looks like the spec has been relaxed between the doc version
> that Marc was looking at and the current one. So it's OK for
> an implementation to either (a) set ISTATUS to 0 if ENABLE
> is 0, or (b) do what you've done and set ISTATUS according
> to the timer comparison whether ENABLE is clear or not
> (or even (c) set ISTATUS to a random value if ENABLE is clear,
> and other less likely choices).
> I think we should add a comment to note that it's architecturally
> UNKNOWN and we've made a choice for our implementation convenience.

Thanks for the clarification. I'll put a comment in v3.
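
For reference, the read path of access_cntp_ctl() with such a comment
might end up looking something like this (just a sketch of the shape,
not the actual v3 change):

        } else {
                u64 now = kvm_phys_timer_read();

                p->regval = ptimer->cnt_ctl;
                /*
                 * ISTATUS is architecturally UNKNOWN while ENABLE is
                 * 0, so for simplicity we report the CVAL comparison
                 * in all cases; this is one of the permitted
                 * implementation choices.
                 */
                if (ptimer->cnt_cval <= now)
                        p->regval |= ARCH_TIMER_CTRL_IT_STAT;
        }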

>
> thanks
> -- PMM
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context
  2017-01-30 14:51         ` Marc Zyngier
@ 2017-01-30 17:40           ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-30 17:40 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Christoffer Dall, Paolo Bonzini, Radim Krčmář,
	linux, Catalin Marinas, Will Deacon, Andre Przywara, KVM General,
	arm-mail-list, kvmarm, lkml - Kernel Mailing List

On Mon, Jan 30, 2017 at 9:51 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> On 30/01/17 14:45, Christoffer Dall wrote:
>> On Sun, Jan 29, 2017 at 11:54:05AM +0000, Marc Zyngier wrote:
>>> On Fri, Jan 27 2017 at 01:04:52 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>>> Make cntvoff per each timer context. This is helpful to abstract kvm
>>>> timer functions to work with timer context without considering timer
>>>> types (e.g. physical timer or virtual timer).
>>>>
>>>> This also would pave the way for ever doing adjustments of the cntvoff
>>>> on a per-CPU basis if that should ever make sense.
>>>>
>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>>> ---
>>>>  arch/arm/include/asm/kvm_host.h   |  6 +++---
>>>>  arch/arm64/include/asm/kvm_host.h |  4 ++--
>>>>  include/kvm/arm_arch_timer.h      |  8 +++-----
>>>>  virt/kvm/arm/arch_timer.c         | 26 ++++++++++++++++++++------
>>>>  virt/kvm/arm/hyp/timer-sr.c       |  3 +--
>>>>  5 files changed, 29 insertions(+), 18 deletions(-)
>>>>
>>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>>> index d5423ab..f5456a9 100644
>>>> --- a/arch/arm/include/asm/kvm_host.h
>>>> +++ b/arch/arm/include/asm/kvm_host.h
>>>> @@ -60,9 +60,6 @@ struct kvm_arch {
>>>>     /* The last vcpu id that ran on each physical CPU */
>>>>     int __percpu *last_vcpu_ran;
>>>>
>>>> -   /* Timer */
>>>> -   struct arch_timer_kvm   timer;
>>>> -
>>>>     /*
>>>>      * Anything that is not used directly from assembly code goes
>>>>      * here.
>>>> @@ -75,6 +72,9 @@ struct kvm_arch {
>>>>     /* Stage-2 page table */
>>>>     pgd_t *pgd;
>>>>
>>>> +   /* A lock to synchronize cntvoff among all vtimer context of vcpus */
>>>> +   spinlock_t cntvoff_lock;
>>>
>>> Is there any condition where we need this to be a spinlock? I would have
>>> thought that a mutex should have been enough, as this should only be
>>> updated on migration or initialization. Not that it matters much in this
>>> case, but I wondered if there is something I'm missing.
>>>
>>
>> I would think the critical section is small enough that a spinlock makes
>> sense, but what I don't think we need is to add the additional lock.
>>
>> I think just taking the kvm->lock should be sufficient, which happens to
>> be a mutex, and while that may be a bit slower to take than the
>> spinlock, it's not in the critical path so let's just keep things
>> simple.
>>
>> Perhaps this is what Marc also meant.
>
> That would be the logical conclusion, assuming that we can sleep on this
> path.

All right. I'll take kvm->lock there.
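
For the record, something like this (a sketch, assuming the update paths can
sleep as discussed above; kvm->lock is a mutex):

        static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)
        {
                struct kvm *kvm = vcpu->kvm;
                struct kvm_vcpu *tmp;
                int i;

                /* Serialize the cross-vcpu cntvoff update on the VM-wide lock */
                mutex_lock(&kvm->lock);
                kvm_for_each_vcpu(i, tmp, kvm)
                        vcpu_vtimer(tmp)->cntvoff = cntvoff;
                mutex_unlock(&kvm->lock);
        }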

Thanks,
Jintack

>
> Thanks,
>
>         M.
> --
> Jazz is not dead. It just smells funny...
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 05/10] KVM: arm/arm64: Initialize the emulated EL1 physical timer
  2017-01-30 14:58       ` Christoffer Dall
@ 2017-01-30 17:44         ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-30 17:44 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel, Peter Maydell

On 30/01/17 14:58, Christoffer Dall wrote:
> On Sun, Jan 29, 2017 at 12:07:48PM +0000, Marc Zyngier wrote:
>> On Fri, Jan 27 2017 at 01:04:55 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>> Initialize the emulated EL1 physical timer with the default irq number.
>>>
>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>> ---
>>>  arch/arm/kvm/reset.c         | 9 ++++++++-
>>>  arch/arm64/kvm/reset.c       | 9 ++++++++-
>>>  include/kvm/arm_arch_timer.h | 3 ++-
>>>  virt/kvm/arm/arch_timer.c    | 9 +++++++--
>>>  4 files changed, 25 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/arm/kvm/reset.c b/arch/arm/kvm/reset.c
>>> index 4b5e802..1da8b2d 100644
>>> --- a/arch/arm/kvm/reset.c
>>> +++ b/arch/arm/kvm/reset.c
>>> @@ -37,6 +37,11 @@
>>>  	.usr_regs.ARM_cpsr = SVC_MODE | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT,
>>>  };
>>>  
>>> +static const struct kvm_irq_level cortexa_ptimer_irq = {
>>> +	{ .irq = 30 },
>>> +	.level = 1,
>>> +};
>>
>> At some point, we'll have to make that discoverable/configurable. Maybe
>> the time for a discoverable arch timer has come (see below).
>>
>>> +
>>>  static const struct kvm_irq_level cortexa_vtimer_irq = {
>>>  	{ .irq = 27 },
>>>  	.level = 1,
>>> @@ -58,6 +63,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>>  {
>>>  	struct kvm_regs *reset_regs;
>>>  	const struct kvm_irq_level *cpu_vtimer_irq;
>>> +	const struct kvm_irq_level *cpu_ptimer_irq;
>>>  
>>>  	switch (vcpu->arch.target) {
>>>  	case KVM_ARM_TARGET_CORTEX_A7:
>>> @@ -65,6 +71,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>>  		reset_regs = &cortexa_regs_reset;
>>>  		vcpu->arch.midr = read_cpuid_id();
>>>  		cpu_vtimer_irq = &cortexa_vtimer_irq;
>>> +		cpu_ptimer_irq = &cortexa_ptimer_irq;
>>>  		break;
>>>  	default:
>>>  		return -ENODEV;
>>> @@ -77,5 +84,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>>  	kvm_reset_coprocs(vcpu);
>>>  
>>>  	/* Reset arch_timer context */
>>> -	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
>>> +	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
>>>  }
>>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
>>> index e95d4f6..d9e9697 100644
>>> --- a/arch/arm64/kvm/reset.c
>>> +++ b/arch/arm64/kvm/reset.c
>>> @@ -46,6 +46,11 @@
>>>  			COMPAT_PSR_I_BIT | COMPAT_PSR_F_BIT),
>>>  };
>>>  
>>> +static const struct kvm_irq_level default_ptimer_irq = {
>>> +	.irq	= 30,
>>> +	.level	= 1,
>>> +};
>>> +
>>>  static const struct kvm_irq_level default_vtimer_irq = {
>>>  	.irq	= 27,
>>>  	.level	= 1,
>>> @@ -104,6 +109,7 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
>>>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>>  {
>>>  	const struct kvm_irq_level *cpu_vtimer_irq;
>>> +	const struct kvm_irq_level *cpu_ptimer_irq;
>>>  	const struct kvm_regs *cpu_reset;
>>>  
>>>  	switch (vcpu->arch.target) {
>>> @@ -117,6 +123,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>>  		}
>>>  
>>>  		cpu_vtimer_irq = &default_vtimer_irq;
>>> +		cpu_ptimer_irq = &default_ptimer_irq;
>>>  		break;
>>>  	}
>>>  
>>> @@ -130,5 +137,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>>  	kvm_pmu_vcpu_reset(vcpu);
>>>  
>>>  	/* Reset timer */
>>> -	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
>>> +	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
>>>  }
>>> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
>>> index 69f648b..a364593 100644
>>> --- a/include/kvm/arm_arch_timer.h
>>> +++ b/include/kvm/arm_arch_timer.h
>>> @@ -59,7 +59,8 @@ struct arch_timer_cpu {
>>>  int kvm_timer_enable(struct kvm_vcpu *vcpu);
>>>  void kvm_timer_init(struct kvm *kvm);
>>>  int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>> -			 const struct kvm_irq_level *irq);
>>> +			 const struct kvm_irq_level *virt_irq,
>>> +			 const struct kvm_irq_level *phys_irq);
>>>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
>>>  void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
>>>  void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>> index f72005a..0f6e935 100644
>>> --- a/virt/kvm/arm/arch_timer.c
>>> +++ b/virt/kvm/arm/arch_timer.c
>>> @@ -329,9 +329,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
>>>  }
>>>  
>>>  int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>> -			 const struct kvm_irq_level *irq)
>>> +			 const struct kvm_irq_level *virt_irq,
>>> +			 const struct kvm_irq_level *phys_irq)
>>>  {
>>>  	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>> +	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
>>>  
>>>  	/*
>>>  	 * The vcpu timer irq number cannot be determined in
>>> @@ -339,7 +341,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>>  	 * kvm_vcpu_set_target(). To handle this, we determine
>>>  	 * vcpu timer irq number when the vcpu is reset.
>>>  	 */
>>> -	vtimer->irq.irq = irq->irq;
>>> +	vtimer->irq.irq = virt_irq->irq;
>>> +	ptimer->irq.irq = phys_irq->irq;
>>>  
>>>  	/*
>>>  	 * The bits in CNTV_CTL are architecturally reset to UNKNOWN for ARMv8
>>> @@ -348,6 +351,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>>  	 * the ARMv7 architecture.
>>>  	 */
>>>  	vtimer->cnt_ctl = 0;
>>> +	ptimer->cnt_ctl = 0;
>>>  	kvm_timer_update_state(vcpu);
>>>  
>>>  	return 0;
>>> @@ -369,6 +373,7 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
>>>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>>>  
>>>  	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
>>> +	vcpu_ptimer(vcpu)->cntvoff = 0;
>>
>> This is quite contentious, IMHO. Do we really want to expose the delta
>> between the virtual and physical counters? That's a clear indication to
>> the guest that it is virtualized. I'm not sure it matters, but I think
>> we should at least make a conscious choice, and maybe give the
>> opportunity to userspace to select the desired behaviour.
>>
> 
> So my understanding of the architecture is that you should always have
> two timer/counter pairs available at EL1.  They may be synchronized, and
> they may not.  If you want an accurate reading of wall clock time, look
> at the physical counter, and that can generally be expected to be fast,
> precise, and synchronized (on working hardware, of course).
> 
> Now, there can be a constant or potentially monotonically increasing
> offset between the physical and virtual counters, which may mean you're
> running under a hypervisor or (in the first case) that firmware
> programmed or neglected to program cntvoff.  I don't think it's an
> inherent problem to expose that difference to a guest, and I think it's
> more important that reading the physical counter is fast and doesn't
> trap.
> 
> The question is which contract we can have with a guest OS, and which
> legacy we have to keep supporting (Linux, UEFI, ?).
> 
> Probably Linux should keep relying on the virtual counter/timer only,
> unless something is advertised in DT/ACPI about it being able to use
> the physical timer/counter pair, even when booted at EL1.  We could
> explore the opportunities to build on that to let the guest figure
> out stolen time by reading the two counters and by programming the
> proper timer depending on the desired semantics (i.e. virtual or
> physical time).
> 
> In terms of this patch, I actually think it's fine, but we may need to
> build something more on top later.  It is possible, though, that I'm
> entirely missing the point about Linux timekeeping infrastructure and
> that my reading of the architecture is bogus.
> 
> What do you think?

I don't disagree with any of this (hopefully I was clearer in my reply
to the cover letter). Eventually, we'll have to support an offset-able
physical counter to support nested virtualization, but this can come at
a later time.
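
For reference, the counter relationship discussed above boils down to the
following (pseudo-code from the guest's point of view; the accessor names
are illustrative):

        u64 phys = read_cntpct();  /* physical count: wall-clock time */
        u64 virt = read_cntvct();  /* virtual count:  phys - CNTVOFF  */
        u64 off  = phys - virt;    /* offset a guest can observe, e.g. as a
                                    * building block for stolen time */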

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-30 15:02       ` Christoffer Dall
  (?)
@ 2017-01-30 17:50         ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-30 17:50 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On 30/01/17 15:02, Christoffer Dall wrote:
> On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
>> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>> Now that we maintain the EL1 physical timer register states of VMs,
>>> update the physical timer interrupt level along with the virtual one.
>>>
>>> Note that the emulated EL1 physical timer is not mapped to any hardware
>>> timer, so we call a proper vgic function.
>>>
>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>> ---
>>>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
>>>  1 file changed, 20 insertions(+)
>>>
>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>> index 0f6e935..3b6bd50 100644
>>> --- a/virt/kvm/arm/arch_timer.c
>>> +++ b/virt/kvm/arm/arch_timer.c
>>> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>  	WARN_ON(ret);
>>>  }
>>>  
>>> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
>>> +				 struct arch_timer_context *timer)
>>> +{
>>> +	int ret;
>>> +
>>> +	BUG_ON(!vgic_initialized(vcpu->kvm));
>>
>> Although I've added my fair share of BUG_ON() in the code base, I've
>> since reconsidered my position. If we get in a situation where the vgic
>> is not initialized, maybe it would be better to just WARN_ON and return
>> early rather than killing the whole box. Thoughts?
>>
> 
> The distinction to me is whether this will cause fatal crashes or
> exploits down the road if we're working on uninitialized data.  If all
> that can happen when the vgic is not initialized is that the guest
> doesn't see interrupts, for example, then a WARN_ON is appropriate.
> 
> Which is the case here?
> 
> That being said, do we need this at all?  This is in the critical path
> and is actually measurable (I know this from my work on the other timer
> series), so it's better to get rid of it if we can.  Can we simply
> convince ourselves this will never happen, and is the code ever likely
> to change so that it gets called with the vgic disabled later?

That'd be the best course of action. I remember us reworking some of
that in the now defunct vgic-less series. Maybe we could salvage that
code, if only for the time we spent on it...
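
For the WARN-and-bail variant, the shape would be roughly (sketch only; the
rest of the function body is elided and would stay as in the patch):

        static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
                                         struct arch_timer_context *timer)
        {
                /* Don't kill the host; worst case the guest misses an interrupt */
                if (WARN_ON(!vgic_initialized(vcpu->kvm)))
                        return;

                /* ... update timer->irq.level and inject via the vgic as before ... */
        }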

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context
  2017-01-29 11:54     ` Marc Zyngier
  (?)
@ 2017-01-30 17:58       ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-30 17:58 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Paolo Bonzini, Radim Krčmář,
	Christoffer Dall, linux, Catalin Marinas, Will Deacon,
	Andre Przywara, KVM General, arm-mail-list, kvmarm,
	lkml - Kernel Mailing List

On Sun, Jan 29, 2017 at 6:54 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> On Fri, Jan 27 2017 at 01:04:52 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>> Make cntvoff per each timer context. This is helpful to abstract kvm
>> timer functions to work with timer context without considering timer
>> types (e.g. physical timer or virtual timer).
>>
>> This also would pave the way for ever doing adjustments of the cntvoff
>> on a per-CPU basis if that should ever make sense.
>>
>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>> ---
>>  arch/arm/include/asm/kvm_host.h   |  6 +++---
>>  arch/arm64/include/asm/kvm_host.h |  4 ++--
>>  include/kvm/arm_arch_timer.h      |  8 +++-----
>>  virt/kvm/arm/arch_timer.c         | 26 ++++++++++++++++++++------
>>  virt/kvm/arm/hyp/timer-sr.c       |  3 +--
>>  5 files changed, 29 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index d5423ab..f5456a9 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -60,9 +60,6 @@ struct kvm_arch {
>>       /* The last vcpu id that ran on each physical CPU */
>>       int __percpu *last_vcpu_ran;
>>
>> -     /* Timer */
>> -     struct arch_timer_kvm   timer;
>> -
>>       /*
>>        * Anything that is not used directly from assembly code goes
>>        * here.
>> @@ -75,6 +72,9 @@ struct kvm_arch {
>>       /* Stage-2 page table */
>>       pgd_t *pgd;
>>
>> +     /* A lock to synchronize cntvoff among all vtimer context of vcpus */
>> +     spinlock_t cntvoff_lock;
>
> Is there any condition where we need this to be a spinlock? I would have
> thought that a mutex should have been enough, as this should only be
> updated on migration or initialization. Not that it matters much in this
> case, but I wondered if there is something I'm missing.
>
>> +
>>       /* Interrupt controller */
>>       struct vgic_dist        vgic;
>>       int max_vcpus;
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index e505038..23749a8 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -71,8 +71,8 @@ struct kvm_arch {
>>       /* Interrupt controller */
>>       struct vgic_dist        vgic;
>>
>> -     /* Timer */
>> -     struct arch_timer_kvm   timer;
>> +     /* A lock to synchronize cntvoff among all vtimer context of vcpus */
>> +     spinlock_t cntvoff_lock;
>>  };
>>
>>  #define KVM_NR_MEM_OBJS     40
>> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
>> index daad3c1..1b9c988 100644
>> --- a/include/kvm/arm_arch_timer.h
>> +++ b/include/kvm/arm_arch_timer.h
>> @@ -23,11 +23,6 @@
>>  #include <linux/hrtimer.h>
>>  #include <linux/workqueue.h>
>>
>> -struct arch_timer_kvm {
>> -     /* Virtual offset */
>> -     u64                     cntvoff;
>> -};
>> -
>>  struct arch_timer_context {
>>       /* Registers: control register, timer value */
>>       u32                             cnt_ctl;
>> @@ -38,6 +33,9 @@ struct arch_timer_context {
>>
>>       /* Active IRQ state caching */
>>       bool                            active_cleared_last;
>> +
>> +     /* Virtual offset */
>> +     u64                     cntvoff;
>>  };
>>
>>  struct arch_timer_cpu {
>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>> index 6740efa..fa4c042 100644
>> --- a/virt/kvm/arm/arch_timer.c
>> +++ b/virt/kvm/arm/arch_timer.c
>> @@ -101,9 +101,10 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>>  static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
>>  {
>>       u64 cval, now;
>> +     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>
>> -     cval = vcpu_vtimer(vcpu)->cnt_cval;
>> -     now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
>> +     cval = vtimer->cnt_cval;
>> +     now = kvm_phys_timer_read() - vtimer->cntvoff;
>>
>>       if (now < cval) {
>>               u64 ns;
>> @@ -159,7 +160,7 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
>>               return false;
>>
>>       cval = vtimer->cnt_cval;
>> -     now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
>> +     now = kvm_phys_timer_read() - vtimer->cntvoff;
>>
>>       return cval <= now;
>>  }
>> @@ -353,10 +354,23 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>       return 0;
>>  }
>>
>> +/* Make the updates of cntvoff for all vtimer contexts atomic */
>> +static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)
>
> Arguably, this acts on the VM itself and not a single vcpu. Maybe you
> should consider passing the struct kvm pointer to reflect this.
>

Yes, that would be better.
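
i.e. something like this (sketch of the suggested signature change only):

        static void update_vtimer_cntvoff(struct kvm *kvm, u64 cntvoff);

        /* callers then pass the VM explicitly, e.g. */
        update_vtimer_cntvoff(vcpu->kvm, kvm_phys_timer_read() - value);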

>> +{
>> +     int i;
>> +
>> +     spin_lock(&vcpu->kvm->arch.cntvoff_lock);
>> +     kvm_for_each_vcpu(i, vcpu, vcpu->kvm)
>> +             vcpu_vtimer(vcpu)->cntvoff = cntvoff;
>> +     spin_unlock(&vcpu->kvm->arch.cntvoff_lock);
>> +}
>> +
>>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
>>  {
>>       struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>>
>> +     update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
>
> Maybe a comment indicating that we recompute CNTVOFF for all vcpus would
> be welcome (this is not a change in semantics, but it was never obvious
> in the existing code).

I'll add a comment. In fact, I was told to make cntvoff synchronized
across all the vcpus, but I'm afraid I don't understand why. Could you
explain to me where this constraint comes from?

>
>> +
>>       INIT_WORK(&timer->expired, kvm_timer_inject_irq_work);
>>       hrtimer_init(&timer->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
>>       timer->timer.function = kvm_timer_expire;
>> @@ -376,7 +390,7 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
>>               vtimer->cnt_ctl = value;
>>               break;
>>       case KVM_REG_ARM_TIMER_CNT:
>> -             vcpu->kvm->arch.timer.cntvoff = kvm_phys_timer_read() - value;
>> +             update_vtimer_cntvoff(vcpu, kvm_phys_timer_read() - value);
>>               break;
>>       case KVM_REG_ARM_TIMER_CVAL:
>>               vtimer->cnt_cval = value;
>> @@ -397,7 +411,7 @@ u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid)
>>       case KVM_REG_ARM_TIMER_CTL:
>>               return vtimer->cnt_ctl;
>>       case KVM_REG_ARM_TIMER_CNT:
>> -             return kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
>> +             return kvm_phys_timer_read() - vtimer->cntvoff;
>>       case KVM_REG_ARM_TIMER_CVAL:
>>               return vtimer->cnt_cval;
>>       }
>> @@ -511,7 +525,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
>>
>>  void kvm_timer_init(struct kvm *kvm)
>>  {
>> -     kvm->arch.timer.cntvoff = kvm_phys_timer_read();
>> +     spin_lock_init(&kvm->arch.cntvoff_lock);
>>  }
>>
>>  /*
>> diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
>> index 0cf0895..4734915 100644
>> --- a/virt/kvm/arm/hyp/timer-sr.c
>> +++ b/virt/kvm/arm/hyp/timer-sr.c
>> @@ -53,7 +53,6 @@ void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
>>
>>  void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
>>  {
>> -     struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>>       struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>>       struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>       u64 val;
>> @@ -71,7 +70,7 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
>>       }
>>
>>       if (timer->enabled) {
>> -             write_sysreg(kvm->arch.timer.cntvoff, cntvoff_el2);
>> +             write_sysreg(vtimer->cntvoff, cntvoff_el2);
>>               write_sysreg_el0(vtimer->cnt_cval, cntv_cval);
>>               isb();
>>               write_sysreg_el0(vtimer->cnt_ctl, cntv_ctl);
>
> Thanks,
>
>         M.
> --
> Jazz is not dead. It just smells funny.
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context
  2017-01-30 17:58       ` Jintack Lim
  (?)
@ 2017-01-30 18:05         ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-30 18:05 UTC (permalink / raw)
  To: Jintack Lim
  Cc: Paolo Bonzini, Radim Krčmář,
	Christoffer Dall, linux, Catalin Marinas, Will Deacon,
	Andre Przywara, KVM General, arm-mail-list, kvmarm,
	lkml - Kernel Mailing List

On 30/01/17 17:58, Jintack Lim wrote:
> On Sun, Jan 29, 2017 at 6:54 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
>> On Fri, Jan 27 2017 at 01:04:52 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>> Make cntvoff per each timer context. This is helpful to abstract kvm
>>> timer functions to work with timer context without considering timer
>>> types (e.g. physical timer or virtual timer).
>>>
>>> This also would pave the way for ever doing adjustments of the cntvoff
>>> on a per-CPU basis if that should ever make sense.
>>>
>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>> ---
>>>  arch/arm/include/asm/kvm_host.h   |  6 +++---
>>>  arch/arm64/include/asm/kvm_host.h |  4 ++--
>>>  include/kvm/arm_arch_timer.h      |  8 +++-----
>>>  virt/kvm/arm/arch_timer.c         | 26 ++++++++++++++++++++------
>>>  virt/kvm/arm/hyp/timer-sr.c       |  3 +--
>>>  5 files changed, 29 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>> index d5423ab..f5456a9 100644
>>> --- a/arch/arm/include/asm/kvm_host.h
>>> +++ b/arch/arm/include/asm/kvm_host.h
>>> @@ -60,9 +60,6 @@ struct kvm_arch {
>>>       /* The last vcpu id that ran on each physical CPU */
>>>       int __percpu *last_vcpu_ran;
>>>
>>> -     /* Timer */
>>> -     struct arch_timer_kvm   timer;
>>> -
>>>       /*
>>>        * Anything that is not used directly from assembly code goes
>>>        * here.
>>> @@ -75,6 +72,9 @@ struct kvm_arch {
>>>       /* Stage-2 page table */
>>>       pgd_t *pgd;
>>>
>>> +     /* A lock to synchronize cntvoff among all vtimer context of vcpus */
>>> +     spinlock_t cntvoff_lock;
>>
>> Is there any condition where we need this to be a spinlock? I would have
>> thought that a mutex should have been enough, as this should only be
>> updated on migration or initialization. Not that it matters much in this
>> case, but I wondered if there is something I'm missing.
>>
>>> +
>>>       /* Interrupt controller */
>>>       struct vgic_dist        vgic;
>>>       int max_vcpus;
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index e505038..23749a8 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -71,8 +71,8 @@ struct kvm_arch {
>>>       /* Interrupt controller */
>>>       struct vgic_dist        vgic;
>>>
>>> -     /* Timer */
>>> -     struct arch_timer_kvm   timer;
>>> +     /* A lock to synchronize cntvoff among all vtimer context of vcpus */
>>> +     spinlock_t cntvoff_lock;
>>>  };
>>>
>>>  #define KVM_NR_MEM_OBJS     40
>>> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
>>> index daad3c1..1b9c988 100644
>>> --- a/include/kvm/arm_arch_timer.h
>>> +++ b/include/kvm/arm_arch_timer.h
>>> @@ -23,11 +23,6 @@
>>>  #include <linux/hrtimer.h>
>>>  #include <linux/workqueue.h>
>>>
>>> -struct arch_timer_kvm {
>>> -     /* Virtual offset */
>>> -     u64                     cntvoff;
>>> -};
>>> -
>>>  struct arch_timer_context {
>>>       /* Registers: control register, timer value */
>>>       u32                             cnt_ctl;
>>> @@ -38,6 +33,9 @@ struct arch_timer_context {
>>>
>>>       /* Active IRQ state caching */
>>>       bool                            active_cleared_last;
>>> +
>>> +     /* Virtual offset */
>>> +     u64                     cntvoff;
>>>  };
>>>
>>>  struct arch_timer_cpu {
>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>> index 6740efa..fa4c042 100644
>>> --- a/virt/kvm/arm/arch_timer.c
>>> +++ b/virt/kvm/arm/arch_timer.c
>>> @@ -101,9 +101,10 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>>>  static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
>>>  {
>>>       u64 cval, now;
>>> +     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>>
>>> -     cval = vcpu_vtimer(vcpu)->cnt_cval;
>>> -     now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
>>> +     cval = vtimer->cnt_cval;
>>> +     now = kvm_phys_timer_read() - vtimer->cntvoff;
>>>
>>>       if (now < cval) {
>>>               u64 ns;
>>> @@ -159,7 +160,7 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
>>>               return false;
>>>
>>>       cval = vtimer->cnt_cval;
>>> -     now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
>>> +     now = kvm_phys_timer_read() - vtimer->cntvoff;
>>>
>>>       return cval <= now;
>>>  }
>>> @@ -353,10 +354,23 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>>       return 0;
>>>  }
>>>
>>> +/* Make the updates of cntvoff for all vtimer contexts atomic */
>>> +static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)
>>
>> Arguably, this acts on the VM itself and not a single vcpu. maybe you
>> should consider passing the struct kvm pointer to reflect this.
>>
> 
> Yes, that would be better.
> 
>>> +{
>>> +     int i;
>>> +
>>> +     spin_lock(&vcpu->kvm->arch.cntvoff_lock);
>>> +     kvm_for_each_vcpu(i, vcpu, vcpu->kvm)
>>> +             vcpu_vtimer(vcpu)->cntvoff = cntvoff;
>>> +     spin_unlock(&vcpu->kvm->arch.cntvoff_lock);
>>> +}
>>> +
>>>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
>>>  {
>>>       struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>>>
>>> +     update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
>>
>> Maybe a comment indicating that we recompute CNTVOFF for all vcpus would
>> be welcome (this is not a change in semantics, but it was never obvious
>> in the existing code).
> 
> I'll add a comment. In fact, I was told to make cntvoff synchronized
> across all the vcpus, but I'm afraid I don't understand why. Could you
> explain to me where this constraint comes from?

The virtual counter is the only one a guest can rely on (as the physical
one is disabled). So we must present to the guest a view of time that is
uniform across CPUs. If we allow CNTVOFF to vary across CPUs, time
starts fluctuating when we migrate a process from a vcpu to another, and
Linux gets *really* unhappy.

An easy fix for this is to make CNTVOFF a VM-global value, ensuring that
all the CPUs see the same counter values at the same time.
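
To make it concrete: the guest reads CNTVCT = CNTPCT - CNTVOFF on
whichever vcpu it currently runs on. If vcpu0 used offset X and vcpu1
used offset Y, a thread migrated from vcpu0 to vcpu1 would see the
counter jump by (X - Y), possibly even backwards.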

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-30 17:50         ` Marc Zyngier
@ 2017-01-30 18:41           ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-01-30 18:41 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Mon, Jan 30, 2017 at 05:50:03PM +0000, Marc Zyngier wrote:
> On 30/01/17 15:02, Christoffer Dall wrote:
> > On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
> >> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> >>> Now that we maintain the EL1 physical timer register states of VMs,
> >>> update the physical timer interrupt level along with the virtual one.
> >>>
> >>> Note that the emulated EL1 physical timer is not mapped to any hardware
> >>> timer, so we call a proper vgic function.
> >>>
> >>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> >>> ---
> >>>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
> >>>  1 file changed, 20 insertions(+)
> >>>
> >>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> >>> index 0f6e935..3b6bd50 100644
> >>> --- a/virt/kvm/arm/arch_timer.c
> >>> +++ b/virt/kvm/arm/arch_timer.c
> >>> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
> >>>  	WARN_ON(ret);
> >>>  }
> >>>  
> >>> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
> >>> +				 struct arch_timer_context *timer)
> >>> +{
> >>> +	int ret;
> >>> +
> >>> +	BUG_ON(!vgic_initialized(vcpu->kvm));
> >>
> >> Although I've added my fair share of BUG_ON() in the code base, I've
> >> since reconsidered my position. If we get in a situation where the vgic
> >> is not initialized, maybe it would be better to just WARN_ON and return
> >> early rather than killing the whole box. Thoughts?
> >>
> > 
> > The distinction to me is whether this will cause fatal crashes or
> > exploits down the road if we're working on uninitialized data.  If all
> > that can happen if the vgic is not initialized, is that the guest
> > doesn't see interrupts, for example, then a WARN_ON is appropriate.
> > 
> > Which is the case here?
> > 
> > That being said, do we need this at all?  This is in the critical path
> > and is actually measurable (I know this from my work on the other timer
> > series), so it's better to get rid of it if we can.  Can we simply
> > convince ourselves this will never happen, and is the code ever likely
> > to change so that it gets called with the vgic disabled later?
> 
> That'd be the best course of action. I remember us reworking some of
> that in the now defunct vgic-less series. Maybe we could salvage that
> code, if only for the time we spent on it...
> 
Ah, we never merged it?  Were we waiting on a userspace implementation
or agreement on the ABI?

There was definitely a useful cleanup with the whole enabled flag thing
on the timer I remember.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context
  2017-01-30 18:05         ` Marc Zyngier
@ 2017-01-30 18:45           ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-30 18:45 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Paolo Bonzini, Radim Krčmář,
	Christoffer Dall, linux, Catalin Marinas, Will Deacon,
	Andre Przywara, KVM General, arm-mail-list, kvmarm,
	lkml - Kernel Mailing List

On Mon, Jan 30, 2017 at 1:05 PM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> On 30/01/17 17:58, Jintack Lim wrote:
>> On Sun, Jan 29, 2017 at 6:54 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
>>> On Fri, Jan 27 2017 at 01:04:52 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>>> Make cntvoff per each timer context. This is helpful to abstract kvm
>>>> timer functions to work with timer context without considering timer
>>>> types (e.g. physical timer or virtual timer).
>>>>
>>>> This also would pave the way for ever doing adjustments of the cntvoff
>>>> on a per-CPU basis if that should ever make sense.
>>>>
>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>>> ---
>>>>  arch/arm/include/asm/kvm_host.h   |  6 +++---
>>>>  arch/arm64/include/asm/kvm_host.h |  4 ++--
>>>>  include/kvm/arm_arch_timer.h      |  8 +++-----
>>>>  virt/kvm/arm/arch_timer.c         | 26 ++++++++++++++++++++------
>>>>  virt/kvm/arm/hyp/timer-sr.c       |  3 +--
>>>>  5 files changed, 29 insertions(+), 18 deletions(-)
>>>>
>>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>>> index d5423ab..f5456a9 100644
>>>> --- a/arch/arm/include/asm/kvm_host.h
>>>> +++ b/arch/arm/include/asm/kvm_host.h
>>>> @@ -60,9 +60,6 @@ struct kvm_arch {
>>>>       /* The last vcpu id that ran on each physical CPU */
>>>>       int __percpu *last_vcpu_ran;
>>>>
>>>> -     /* Timer */
>>>> -     struct arch_timer_kvm   timer;
>>>> -
>>>>       /*
>>>>        * Anything that is not used directly from assembly code goes
>>>>        * here.
>>>> @@ -75,6 +72,9 @@ struct kvm_arch {
>>>>       /* Stage-2 page table */
>>>>       pgd_t *pgd;
>>>>
>>>> +     /* A lock to synchronize cntvoff among all vtimer context of vcpus */
>>>> +     spinlock_t cntvoff_lock;
>>>
>>> Is there any condition where we need this to be a spinlock? I would have
>>> thought that a mutex should have been enough, as this should only be
>>> updated on migration or initialization. Not that it matters much in this
>>> case, but I wondered if there is something I'm missing.
>>>
>>>> +
>>>>       /* Interrupt controller */
>>>>       struct vgic_dist        vgic;
>>>>       int max_vcpus;
>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>> index e505038..23749a8 100644
>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>> @@ -71,8 +71,8 @@ struct kvm_arch {
>>>>       /* Interrupt controller */
>>>>       struct vgic_dist        vgic;
>>>>
>>>> -     /* Timer */
>>>> -     struct arch_timer_kvm   timer;
>>>> +     /* A lock to synchronize cntvoff among all vtimer context of vcpus */
>>>> +     spinlock_t cntvoff_lock;
>>>>  };
>>>>
>>>>  #define KVM_NR_MEM_OBJS     40
>>>> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
>>>> index daad3c1..1b9c988 100644
>>>> --- a/include/kvm/arm_arch_timer.h
>>>> +++ b/include/kvm/arm_arch_timer.h
>>>> @@ -23,11 +23,6 @@
>>>>  #include <linux/hrtimer.h>
>>>>  #include <linux/workqueue.h>
>>>>
>>>> -struct arch_timer_kvm {
>>>> -     /* Virtual offset */
>>>> -     u64                     cntvoff;
>>>> -};
>>>> -
>>>>  struct arch_timer_context {
>>>>       /* Registers: control register, timer value */
>>>>       u32                             cnt_ctl;
>>>> @@ -38,6 +33,9 @@ struct arch_timer_context {
>>>>
>>>>       /* Active IRQ state caching */
>>>>       bool                            active_cleared_last;
>>>> +
>>>> +     /* Virtual offset */
>>>> +     u64                     cntvoff;
>>>>  };
>>>>
>>>>  struct arch_timer_cpu {
>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>>> index 6740efa..fa4c042 100644
>>>> --- a/virt/kvm/arm/arch_timer.c
>>>> +++ b/virt/kvm/arm/arch_timer.c
>>>> @@ -101,9 +101,10 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
>>>>  static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
>>>>  {
>>>>       u64 cval, now;
>>>> +     struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
>>>>
>>>> -     cval = vcpu_vtimer(vcpu)->cnt_cval;
>>>> -     now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
>>>> +     cval = vtimer->cnt_cval;
>>>> +     now = kvm_phys_timer_read() - vtimer->cntvoff;
>>>>
>>>>       if (now < cval) {
>>>>               u64 ns;
>>>> @@ -159,7 +160,7 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
>>>>               return false;
>>>>
>>>>       cval = vtimer->cnt_cval;
>>>> -     now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
>>>> +     now = kvm_phys_timer_read() - vtimer->cntvoff;
>>>>
>>>>       return cval <= now;
>>>>  }
>>>> @@ -353,10 +354,23 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
>>>>       return 0;
>>>>  }
>>>>
>>>> +/* Make the updates of cntvoff for all vtimer contexts atomic */
>>>> +static void update_vtimer_cntvoff(struct kvm_vcpu *vcpu, u64 cntvoff)
>>>
>>> Arguably, this acts on the VM itself and not a single vcpu. maybe you
>>> should consider passing the struct kvm pointer to reflect this.
>>>
>>
>> Yes, that would be better.
>>
>>>> +{
>>>> +     int i;
>>>> +
>>>> +     spin_lock(&vcpu->kvm->arch.cntvoff_lock);
>>>> +     kvm_for_each_vcpu(i, vcpu, vcpu->kvm)
>>>> +             vcpu_vtimer(vcpu)->cntvoff = cntvoff;
>>>> +     spin_unlock(&vcpu->kvm->arch.cntvoff_lock);
>>>> +}
>>>> +
>>>>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
>>>>  {
>>>>       struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>>>>
>>>> +     update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
>>>
>>> Maybe a comment indicating that we recompute CNTVOFF for all vcpus would
>>> be welcome (this is not a change in semantics, but it was never obvious
>>> in the existing code).
>>
>> I'll add a comment. In fact, I was told to make cntvoff synchronized
>> across all the vcpus, but I'm afraid I don't understand why. Could you
>> explain to me where this constraint comes from?
>
> The virtual counter is the only one a guest can rely on (as the physical
> one is disabled). So we must present to the guest a view of time that is
> uniform across CPUs. If we allow CNTVOFF to vary across CPUs, time
> starts fluctuating when we migrate a process from a vcpu to another, and
> Linux gets *really* unhappy.

Ah, that makes sense to me. Thanks a lot.

>
> An easy fix for this is to make CNTVOFF a VM-global value, ensuring that
> all the CPUs see the same counter values at the same time.
>
> Thanks,
>
>         M.
> --
> Jazz is not dead. It just smells funny...
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-30 18:41           ` Christoffer Dall
  (?)
@ 2017-01-30 18:48             ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-30 18:48 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On 30/01/17 18:41, Christoffer Dall wrote:
> On Mon, Jan 30, 2017 at 05:50:03PM +0000, Marc Zyngier wrote:
>> On 30/01/17 15:02, Christoffer Dall wrote:
>>> On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
>>>> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>>>> Now that we maintain the EL1 physical timer register states of VMs,
>>>>> update the physical timer interrupt level along with the virtual one.
>>>>>
>>>>> Note that the emulated EL1 physical timer is not mapped to any hardware
>>>>> timer, so we call a proper vgic function.
>>>>>
>>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>>>> ---
>>>>>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
>>>>>  1 file changed, 20 insertions(+)
>>>>>
>>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>>>> index 0f6e935..3b6bd50 100644
>>>>> --- a/virt/kvm/arm/arch_timer.c
>>>>> +++ b/virt/kvm/arm/arch_timer.c
>>>>> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>>>  	WARN_ON(ret);
>>>>>  }
>>>>>  
>>>>> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>>> +				 struct arch_timer_context *timer)
>>>>> +{
>>>>> +	int ret;
>>>>> +
>>>>> +	BUG_ON(!vgic_initialized(vcpu->kvm));
>>>>
>>>> Although I've added my fair share of BUG_ON() in the code base, I've
>>>> since reconsidered my position. If we get in a situation where the vgic
>>>> is not initialized, maybe it would be better to just WARN_ON and return
>>>> early rather than killing the whole box. Thoughts?
>>>>
>>>
>>> The distinction to me is whether this will cause fatal crashes or
>>> exploits down the road if we're working on uninitialized data.  If all
>>> that can happen if the vgic is not initialized, is that the guest
>>> doesn't see interrupts, for example, then a WARN_ON is appropriate.
>>>
>>> Which is the case here?
>>>
> >>> That being said, do we need this at all?  This is in the critical path
>>> and is actually measurable (I know this from my work on the other timer
>>> series), so it's better to get rid of it if we can.  Can we simply
>>> convince ourselves this will never happen, and is the code ever likely
>>> to change so that it gets called with the vgic disabled later?
>>
>> That'd be the best course of action. I remember us reworking some of
>> that in the now defunct vgic-less series. Maybe we could salvage that
>> code, if only for the time we spent on it...
>>
> Ah, we never merged it?  Were we waiting on a userspace implementation
> or agreement on the ABI?

We were waiting on the userspace side to be respun against the latest
API, and there were some comments from Peter (IIRC) about supporting
PPIs in general (the other timers and the PMU, for example).

None of that happened, as the most vocal proponent of the series
apparently lost interest.

> There was definitely a useful cleanup with the whole enabled flag thing
> on the timer I remember.

Indeed. We should at least try to resurrect that bit.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 00/10] Provide the EL1 physical timer to the VM
  2017-01-29 15:55   ` Marc Zyngier
  (?)
@ 2017-01-30 19:02     ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-30 19:02 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Paolo Bonzini, Radim Krčmář,
	Christoffer Dall, linux, Catalin Marinas, Will Deacon,
	Andre Przywara, KVM General, arm-mail-list, kvmarm,
	lkml - Kernel Mailing List

Hi Marc,

On Sun, Jan 29, 2017 at 10:55 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> Hi Jintack,
>
> On Fri, Jan 27 2017 at 01:04:50 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>> The ARM architecture defines the EL1 physical timer and the virtual timer,
>> and it is reasonable for an OS to expect to be able to access both.
>> However, the current KVM implementation does not provide the EL1 physical
>> timer to VMs but terminates VMs on access to the timer.
>>
>> This patch series enables VMs to use the EL1 physical timer through
>> trap-and-emulate.  The KVM host emulates each EL1 physical timer register
>> access and sets up the background timer accordingly.  When the background
>> timer expires, the KVM host injects EL1 physical timer interrupts to the
>> VM.  Alternatively, it's also possible to allow VMs to access the EL1
>> physical timer without trapping.  However, this requires somehow using the
>> EL2 physical timer for the Linux host while running the VM instead of the
>> EL1 physical timer.  Right now I just implemented trap-and-emulate because
>> this was straightforward to do, and I leave it to future work to determine
>> if transferring the EL1 physical timer state to the EL2 timer provides any
>> performance benefit.
>>
>> This feature will be useful for any OS that wishes to access the EL1
>> physical timer. Nested virtualization is one of those use cases. A nested
>> hypervisor running inside a VM would think it has full access to the
>> hardware and naturally tries to use the EL1 physical timer as Linux would
>> do. Other nested hypervisors may try to use the EL2 physical timer as Xen
>> would do, but supporting the EL2 physical timer to the VM is out of scope
>> of this patch series. This patch series will make it easy to add the EL2
>> timer support in the future, though.
>>
>> Note that Linux VMs booting in EL1 will be unaffected by this patch series
>> and will continue to use only the virtual timer and this patch series will
>> therefore not introduce any performance degradation as a result of
>> trap-and-emulate.
>
> Thanks for respinning this series. Overall, this looks quite good, and
> the couple of comments I have should be easy to address.

Thanks for the review!

>
> My main concern is that we do expose a timer that doesn't hide
> CNTVOFF. I appreciate that that was already the case, since CNTPCT was
> always available (and this avoided trapping the counter), but maybe we
> should have a way for userspace to ask for a mode where CNTPCT=CNTVCT,
> byt trapping the physical counter and taking CNTVOFF in all physical
> timer calculations.

As discussed in the other thread, I think we can expose CNTVOFF to the
guest OS. I have a patch that lets the guest hypervisor observe CNTVCT
= CNTPCT - offset (virtual CNTVOFF_EL2) and I will include it in the
next nesting patch series.
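
Roughly along these lines (just a sketch to show the idea, not the
actual patch -- vcpu_el2_cntvoff() below is a made-up placeholder for
however the nesting series ends up storing the virtual CNTVOFF_EL2):

static u64 read_guest_hyp_cntvct(struct kvm_vcpu *vcpu)
{
        /* The guest hypervisor sees CNTVCT = CNTPCT - virtual CNTVOFF_EL2 */
        return kvm_phys_timer_read() - vcpu_el2_cntvoff(vcpu);
}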

Thanks,
Jintack

>
> I'm pretty sure you've addressed this one way or another in your nested
> virt series, so maybe extracting the relevant patches and adding them on
> top of this series could be an option?
>
> Thanks,
>
>         M.
> --
> Jazz is not dead. It just smells funny.
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* [RFC v2 00/10] Provide the EL1 physical timer to the VM
@ 2017-01-30 19:02     ` Jintack Lim
  0 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-01-30 19:02 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Marc,

On Sun, Jan 29, 2017 at 10:55 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> Hi Jintack,
>
> On Fri, Jan 27 2017 at 01:04:50 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>> The ARM architecture defines the EL1 physical timer and the virtual timer,
>> and it is reasonable for an OS to expect to be able to access both.
>> However, the current KVM implementation does not provide the EL1 physical
>> timer to VMs but terminates VMs on access to the timer.
>>
>> This patch series enables VMs to use the EL1 physical timer through
>> trap-and-emulate.  The KVM host emulates each EL1 physical timer register
>> access and sets up the background timer accordingly.  When the background
>> timer expires, the KVM host injects EL1 physical timer interrupts to the
>> VM.  Alternatively, it's also possible to allow VMs to access the EL1
>> physical timer without trapping.  However, this requires somehow using the
>> EL2 physical timer for the Linux host while running the VM instead of the
>> EL1 physical timer.  Right now I just implemented trap-and-emulate because
>> this was straightforward to do, and I leave it to future work to determine
>> if transferring the EL1 physical timer state to the EL2 timer provides any
>> performance benefit.
>>
>> This feature will be useful for any OS that wishes to access the EL1
>> physical timer. Nested virtualization is one of those use cases. A nested
>> hypervisor running inside a VM would think it has full access to the
>> hardware and naturally tries to use the EL1 physical timer as Linux would
>> do. Other nested hypervisors may try to use the EL2 physical timer as Xen
>> would do, but supporting the EL2 physical timer to the VM is out of scope
>> of this patch series. This patch series will make it easy to add the EL2
>> timer support in the future, though.
>>
>> Note that Linux VMs booting in EL1 will be unaffected by this patch series
>> and will continue to use only the virtual timer and this patch series will
>> therefore not introduce any performance degredation as a result of
>> trap-and-emulate.
>
> Thanks for respining this series. Overall, this looks quite good, and
> the couple of comments I have should be easy to address.

Thanks for the review!

>
> My main concern is that we do expose a timer that doesn't hide
> CNTVOFF. I appreciate that that was already the case, since CNTPCT was
> always available (and this avoided trapping the counter), but maybe we
> should have a way for userspace to ask for a mode where CNTPCT=CNTVCT,
> byt trapping the physical counter and taking CNTVOFF in all physical
> timer calculations.

As discussed in the other thread, I think we can expose CNTVOFF to the
guest OS. I have a patch that lets the guest hypervisor observe CNTVCT
= CNTPCT - offset (virtual CNTVOFF_EL2) and I will include it in the
next nesting patch series.

Thanks,
Jintack

>
> I'm pretty sure you've addressed this one way or another in your nested
> virt series, so maybe extracting the relevant patches and adding them on
> top of this series could be an option?
>
> Thanks,
>
>         M.
> --
> Jazz is not dead. It just smells funny.
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 05/10] KVM: arm/arm64: Initialize the emulated EL1 physical timer
  2017-01-30 17:44         ` Marc Zyngier
@ 2017-01-30 19:04           ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-01-30 19:04 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel, Peter Maydell

On Mon, Jan 30, 2017 at 05:44:20PM +0000, Marc Zyngier wrote:
> On 30/01/17 14:58, Christoffer Dall wrote:
> > On Sun, Jan 29, 2017 at 12:07:48PM +0000, Marc Zyngier wrote:
> >> On Fri, Jan 27 2017 at 01:04:55 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> >>> Initialize the emulated EL1 physical timer with the default irq number.
> >>>
> >>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> >>> ---
> >>>  arch/arm/kvm/reset.c         | 9 ++++++++-
> >>>  arch/arm64/kvm/reset.c       | 9 ++++++++-
> >>>  include/kvm/arm_arch_timer.h | 3 ++-
> >>>  virt/kvm/arm/arch_timer.c    | 9 +++++++--
> >>>  4 files changed, 25 insertions(+), 5 deletions(-)
> >>>
> >>> diff --git a/arch/arm/kvm/reset.c b/arch/arm/kvm/reset.c
> >>> index 4b5e802..1da8b2d 100644
> >>> --- a/arch/arm/kvm/reset.c
> >>> +++ b/arch/arm/kvm/reset.c
> >>> @@ -37,6 +37,11 @@
> >>>  	.usr_regs.ARM_cpsr = SVC_MODE | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT,
> >>>  };
> >>>  
> >>> +static const struct kvm_irq_level cortexa_ptimer_irq = {
> >>> +	{ .irq = 30 },
> >>> +	.level = 1,
> >>> +};
> >>
> >> At some point, we'll have to make that discoverable/configurable. Maybe
> >> the time for a discoverable arch timer has come (see below).
> >>
> >>> +
> >>>  static const struct kvm_irq_level cortexa_vtimer_irq = {
> >>>  	{ .irq = 27 },
> >>>  	.level = 1,
> >>> @@ -58,6 +63,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >>>  {
> >>>  	struct kvm_regs *reset_regs;
> >>>  	const struct kvm_irq_level *cpu_vtimer_irq;
> >>> +	const struct kvm_irq_level *cpu_ptimer_irq;
> >>>  
> >>>  	switch (vcpu->arch.target) {
> >>>  	case KVM_ARM_TARGET_CORTEX_A7:
> >>> @@ -65,6 +71,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >>>  		reset_regs = &cortexa_regs_reset;
> >>>  		vcpu->arch.midr = read_cpuid_id();
> >>>  		cpu_vtimer_irq = &cortexa_vtimer_irq;
> >>> +		cpu_ptimer_irq = &cortexa_ptimer_irq;
> >>>  		break;
> >>>  	default:
> >>>  		return -ENODEV;
> >>> @@ -77,5 +84,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >>>  	kvm_reset_coprocs(vcpu);
> >>>  
> >>>  	/* Reset arch_timer context */
> >>> -	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
> >>> +	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
> >>>  }
> >>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> >>> index e95d4f6..d9e9697 100644
> >>> --- a/arch/arm64/kvm/reset.c
> >>> +++ b/arch/arm64/kvm/reset.c
> >>> @@ -46,6 +46,11 @@
> >>>  			COMPAT_PSR_I_BIT | COMPAT_PSR_F_BIT),
> >>>  };
> >>>  
> >>> +static const struct kvm_irq_level default_ptimer_irq = {
> >>> +	.irq	= 30,
> >>> +	.level	= 1,
> >>> +};
> >>> +
> >>>  static const struct kvm_irq_level default_vtimer_irq = {
> >>>  	.irq	= 27,
> >>>  	.level	= 1,
> >>> @@ -104,6 +109,7 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
> >>>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >>>  {
> >>>  	const struct kvm_irq_level *cpu_vtimer_irq;
> >>> +	const struct kvm_irq_level *cpu_ptimer_irq;
> >>>  	const struct kvm_regs *cpu_reset;
> >>>  
> >>>  	switch (vcpu->arch.target) {
> >>> @@ -117,6 +123,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >>>  		}
> >>>  
> >>>  		cpu_vtimer_irq = &default_vtimer_irq;
> >>> +		cpu_ptimer_irq = &default_ptimer_irq;
> >>>  		break;
> >>>  	}
> >>>  
> >>> @@ -130,5 +137,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >>>  	kvm_pmu_vcpu_reset(vcpu);
> >>>  
> >>>  	/* Reset timer */
> >>> -	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
> >>> +	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
> >>>  }
> >>> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> >>> index 69f648b..a364593 100644
> >>> --- a/include/kvm/arm_arch_timer.h
> >>> +++ b/include/kvm/arm_arch_timer.h
> >>> @@ -59,7 +59,8 @@ struct arch_timer_cpu {
> >>>  int kvm_timer_enable(struct kvm_vcpu *vcpu);
> >>>  void kvm_timer_init(struct kvm *kvm);
> >>>  int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >>> -			 const struct kvm_irq_level *irq);
> >>> +			 const struct kvm_irq_level *virt_irq,
> >>> +			 const struct kvm_irq_level *phys_irq);
> >>>  void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
> >>>  void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
> >>>  void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
> >>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> >>> index f72005a..0f6e935 100644
> >>> --- a/virt/kvm/arm/arch_timer.c
> >>> +++ b/virt/kvm/arm/arch_timer.c
> >>> @@ -329,9 +329,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
> >>>  }
> >>>  
> >>>  int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >>> -			 const struct kvm_irq_level *irq)
> >>> +			 const struct kvm_irq_level *virt_irq,
> >>> +			 const struct kvm_irq_level *phys_irq)
> >>>  {
> >>>  	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
> >>> +	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
> >>>  
> >>>  	/*
> >>>  	 * The vcpu timer irq number cannot be determined in
> >>> @@ -339,7 +341,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >>>  	 * kvm_vcpu_set_target(). To handle this, we determine
> >>>  	 * vcpu timer irq number when the vcpu is reset.
> >>>  	 */
> >>> -	vtimer->irq.irq = irq->irq;
> >>> +	vtimer->irq.irq = virt_irq->irq;
> >>> +	ptimer->irq.irq = phys_irq->irq;
> >>>  
> >>>  	/*
> >>>  	 * The bits in CNTV_CTL are architecturally reset to UNKNOWN for ARMv8
> >>> @@ -348,6 +351,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
> >>>  	 * the ARMv7 architecture.
> >>>  	 */
> >>>  	vtimer->cnt_ctl = 0;
> >>> +	ptimer->cnt_ctl = 0;
> >>>  	kvm_timer_update_state(vcpu);
> >>>  
> >>>  	return 0;
> >>> @@ -369,6 +373,7 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
> >>>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> >>>  
> >>>  	update_vtimer_cntvoff(vcpu, kvm_phys_timer_read());
> >>> +	vcpu_ptimer(vcpu)->cntvoff = 0;
> >>
> >> This is quite contentious, IMHO. Do we really want to expose the delta
> >> between the virtual and physical counters? That's a clear indication to
> >> the guest that it is virtualized. I'm not sure it matters, but I think
> >> we should at least make a conscious choice, and maybe give the
> >> opportunity to userspace to select the desired behaviour.
> >>
> > 
> > So my understanding of the architecture is that you should always have
> > two timer/counter pairs available at EL1.  They may be synchronized, and
> > they may not.  If you want an accurate reading of wall clock time, look
> > at the physical counter, and that can generally be expected to be fast,
> > precise, and synchronized (on working hardware, of course).
> > 
> > Now, there can be a constant or potentially monotonically increasing
> > offset between the physical and virtual counters, which may mean you're
> > running under a hypervisor or (in the first case) that firmware
> > programmed or neglected to program cntvoff.  I don't think it's an
> > inherent problem to expose that difference to a guest, and I think it's
> > more important that reading the physical counter is fast and doesn't
> > trap.
> > 
> > The question is which contract we can have with a guest OS, and which
> > legacy we have to keep supporting (Linux, UEFI, ?).
> > 
> > Probably Linux should keep relying on the virtual counter/timer only,
> > unless something is advertised in DT/ACPI about it being able to use
> > the physical timer/counter pair, even when booted at EL1.  We could
> > explore the opportunities to build on that to let the guest figure
> > out stolen time by reading the two counters and by programming the
> > proper timer depending on the desired semantics (i.e. virtual or
> > physical time).
> > 
> > In terms of this patch, I actually think it's fine, but we may need to
> > build something more on top later.  It is possible, though, that I'm
> > entirely missing the point about Linux timekeeping infrastructure and
> > that my reading of the architecture is bogus.
> > 
> > What do you think?
> 
> I don't disagree with any of this (hopefully I was clearer in my reply
> to the cover letter).

Yeah, my long-winded reply was sort of to convince myself about my own
understanding :)


> Eventually, we'll have to support an offset-able
> physical counter to support nested virtualization, but this can come at
> a later time.
> 
Why do we need the offset-able physical counter for nested
virtualization?  I would think for nested virt we just need to support
respecting how the guest hypervisor programs CNTVOFF?
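
Purely as an illustration of what I mean (made-up helper names, not a
real patch): when the guest hypervisor enters its own guest, we would
just fold its emulated CNTVOFF_EL2 into the vtimer context we already
have:

static void kvm_timer_sync_nested_cntvoff(struct kvm_vcpu *vcpu)
{
	/*
	 * Sketch only: when the guest hypervisor runs its own guest,
	 * honour whatever it wrote to its emulated CNTVOFF_EL2 by
	 * using that value as the hardware virtual offset for this
	 * run.  vcpu_is_running_nested_guest() and
	 * vcpu_shadow_cntvoff_el2() are made-up names.
	 */
	if (vcpu_is_running_nested_guest(vcpu))
		vcpu_vtimer(vcpu)->cntvoff = vcpu_shadow_cntvoff_el2(vcpu);
}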

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-30 18:48             ` Marc Zyngier
@ 2017-01-30 19:06               ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-01-30 19:06 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Mon, Jan 30, 2017 at 06:48:02PM +0000, Marc Zyngier wrote:
> On 30/01/17 18:41, Christoffer Dall wrote:
> > On Mon, Jan 30, 2017 at 05:50:03PM +0000, Marc Zyngier wrote:
> >> On 30/01/17 15:02, Christoffer Dall wrote:
> >>> On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
> >>>> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> >>>>> Now that we maintain the EL1 physical timer register states of VMs,
> >>>>> update the physical timer interrupt level along with the virtual one.
> >>>>>
> >>>>> Note that the emulated EL1 physical timer is not mapped to any hardware
> >>>>> timer, so we call a proper vgic function.
> >>>>>
> >>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> >>>>> ---
> >>>>>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
> >>>>>  1 file changed, 20 insertions(+)
> >>>>>
> >>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> >>>>> index 0f6e935..3b6bd50 100644
> >>>>> --- a/virt/kvm/arm/arch_timer.c
> >>>>> +++ b/virt/kvm/arm/arch_timer.c
> >>>>> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
> >>>>>  	WARN_ON(ret);
> >>>>>  }
> >>>>>  
> >>>>> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
> >>>>> +				 struct arch_timer_context *timer)
> >>>>> +{
> >>>>> +	int ret;
> >>>>> +
> >>>>> +	BUG_ON(!vgic_initialized(vcpu->kvm));
> >>>>
> >>>> Although I've added my fair share of BUG_ON() in the code base, I've
> >>>> since reconsidered my position. If we get in a situation where the vgic
> >>>> is not initialized, maybe it would be better to just WARN_ON and return
> >>>> early rather than killing the whole box. Thoughts?
> >>>>
> >>>
> >>> The distinction to me is whether this will cause fatal crashes or
> >>> exploits down the road if we're working on uninitialized data.  If all
> >>> that can happen if the vgic is not initialized, is that the guest
> >>> doesn't see interrupts, for example, then a WARN_ON is appropriate.
> >>>
> >>> Which is the case here?
> >>>
> >>> That being said, do we need this at all?  This is in the critical path
> >>> and is actually measurable (I know this from my work on the other timer
> >>> series), so it's better to get rid of it if we can.  Can we simply
> >>> convince ourselves this will never happen, and is the code ever likely
> >>> to change so that it gets called with the vgic disabled later?
> >>
> >> That'd be the best course of action. I remember us reworking some of
> >> that in the now defunct vgic-less series. Maybe we could salvage that
> >> code, if only for the time we spent on it...
> >>
> > Ah, we never merged it?  Were we waiting on a userspace implementation
> > or agreement on the ABI?
> 
> We were waiting on the userspace side to be respun against the latest
> API, and there were some comments from Peter (IIRC) about supporting
> PPIs in general (the other timers and the PMU, for example).
> 
> None of that happened, as the most vocal proponent of the series
> apparently lost interest.
> 
> > There was definitely a useful cleanup with the whole enabled flag thing
> > on the timer, I remember.
> 
> Indeed. We should at least try to resurrect that bit.
> 

It's probably worth trying to resurrect the whole thing, I think,
especially since the implementation ended up looking quite nice.

I can add a rebase of that to my list of never-ending timer rework.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-30 19:06               ` Christoffer Dall
@ 2017-01-31 17:00                 ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-01-31 17:00 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On 30/01/17 19:06, Christoffer Dall wrote:
> On Mon, Jan 30, 2017 at 06:48:02PM +0000, Marc Zyngier wrote:
>> On 30/01/17 18:41, Christoffer Dall wrote:
>>> On Mon, Jan 30, 2017 at 05:50:03PM +0000, Marc Zyngier wrote:
>>>> On 30/01/17 15:02, Christoffer Dall wrote:
>>>>> On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
>>>>>> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>>>>>> Now that we maintain the EL1 physical timer register states of VMs,
>>>>>>> update the physical timer interrupt level along with the virtual one.
>>>>>>>
>>>>>>> Note that the emulated EL1 physical timer is not mapped to any hardware
>>>>>>> timer, so we call a proper vgic function.
>>>>>>>
>>>>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>>>>>> ---
>>>>>>>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
>>>>>>>  1 file changed, 20 insertions(+)
>>>>>>>
>>>>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>>>>>> index 0f6e935..3b6bd50 100644
>>>>>>> --- a/virt/kvm/arm/arch_timer.c
>>>>>>> +++ b/virt/kvm/arm/arch_timer.c
>>>>>>> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>>>>>  	WARN_ON(ret);
>>>>>>>  }
>>>>>>>  
>>>>>>> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>>>>> +				 struct arch_timer_context *timer)
>>>>>>> +{
>>>>>>> +	int ret;
>>>>>>> +
>>>>>>> +	BUG_ON(!vgic_initialized(vcpu->kvm));
>>>>>>
>>>>>> Although I've added my fair share of BUG_ON() in the code base, I've
>>>>>> since reconsidered my position. If we get in a situation where the vgic
>>>>>> is not initialized, maybe it would be better to just WARN_ON and return
>>>>>> early rather than killing the whole box. Thoughts?
>>>>>>
>>>>>
>>>>> The distinction to me is whether this will cause fatal crashes or
>>>>> exploits down the road if we're working on uninitialized data.  If all
>>>>> that can happen if the vgic is not initialized, is that the guest
>>>>> doesn't see interrupts, for example, then a WARN_ON is appropriate.
>>>>>
>>>>> Which is the case here?
>>>>>
>>>>> That being said, do we need this at all?  This is in the critical path
>>>>> and is actually measurable (I know this from my work on the other timer
>>>>> series), so it's better to get rid of it if we can.  Can we simply
>>>>> convince ourselves this will never happen, and is the code ever likely
>>>>> to change so that it gets called with the vgic disabled later?
>>>>
>>>> That'd be the best course of action. I remember us reworking some of
>>>> that in the now defunct vgic-less series. Maybe we could salvage that
>>>> code, if only for the time we spent on it...
>>>>
>>> Ah, we never merged it?  Were we waiting on a userspace implementation
>>> or agreement on the ABI?
>>
>> We were waiting on the userspace side to be respun against the latest
>> API, and there were some comments from Peter (IIRC) about supporting
>> PPIs in general (the other timers and the PMU, for example).
>>
>> None of that happened, as the most vocal proponent of the series
>> apparently lost interest.
>>
>>> There was definitely a useful cleanup with the whole enabled flag thing
>>> on the timer, I remember.
>>
>> Indeed. We should at least try to resurrect that bit.
>>
> 
> It's probably worth trying to resurrect the whole thing, I think,
> especially since the implementation ended up looking quite nice.

Indeed. My only concern is about the userspace counterpart, which hasn't
materialized as expected. Hopefully it will this time around!

> I can add a rebase of that to my list of never-ending timer rework.

We all know that you can do that while sleeping! ;-)

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-31 17:00                 ` Marc Zyngier
@ 2017-02-01  8:02                   ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-02-01  8:02 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Tue, Jan 31, 2017 at 05:00:03PM +0000, Marc Zyngier wrote:
> On 30/01/17 19:06, Christoffer Dall wrote:
> > On Mon, Jan 30, 2017 at 06:48:02PM +0000, Marc Zyngier wrote:
> >> On 30/01/17 18:41, Christoffer Dall wrote:
> >>> On Mon, Jan 30, 2017 at 05:50:03PM +0000, Marc Zyngier wrote:
> >>>> On 30/01/17 15:02, Christoffer Dall wrote:
> >>>>> On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
> >>>>>> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> >>>>>>> Now that we maintain the EL1 physical timer register states of VMs,
> >>>>>>> update the physical timer interrupt level along with the virtual one.
> >>>>>>>
> >>>>>>> Note that the emulated EL1 physical timer is not mapped to any hardware
> >>>>>>> timer, so we call a proper vgic function.
> >>>>>>>
> >>>>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> >>>>>>> ---
> >>>>>>>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
> >>>>>>>  1 file changed, 20 insertions(+)
> >>>>>>>
> >>>>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> >>>>>>> index 0f6e935..3b6bd50 100644
> >>>>>>> --- a/virt/kvm/arm/arch_timer.c
> >>>>>>> +++ b/virt/kvm/arm/arch_timer.c
> >>>>>>> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
> >>>>>>>  	WARN_ON(ret);
> >>>>>>>  }
> >>>>>>>  
> >>>>>>> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
> >>>>>>> +				 struct arch_timer_context *timer)
> >>>>>>> +{
> >>>>>>> +	int ret;
> >>>>>>> +
> >>>>>>> +	BUG_ON(!vgic_initialized(vcpu->kvm));
> >>>>>>
> >>>>>> Although I've added my fair share of BUG_ON() in the code base, I've
> >>>>>> since reconsidered my position. If we get in a situation where the vgic
> >>>>>> is not initialized, maybe it would be better to just WARN_ON and return
> >>>>>> early rather than killing the whole box. Thoughts?
> >>>>>>
> >>>>>
> >>>>> The distinction to me is whether this will cause fatal crashes or
> >>>>> exploits down the road if we're working on uninitialized data.  If all
> >>>>> that can happen if the vgic is not initialized, is that the guest
> >>>>> doesn't see interrupts, for example, then a WARN_ON is appropriate.
> >>>>>
> >>>>> Which is the case here?
> >>>>>
> >>>>> That being said, do we need this at all?  This is in the critical path
> >>>>> and is actually measurable (I know this from my work on the other timer
> >>>>> series), so it's better to get rid of it if we can.  Can we simply
> >>>>> convince ourselves this will never happen, and is the code ever likely
> >>>>> to change so that it gets called with the vgic disabled later?
> >>>>
> >>>> That'd be the best course of action. I remember us reworking some of
> >>>> that in the now defunct vgic-less series. Maybe we could salvage that
> >>>> code, if only for the time we spent on it...
> >>>>
> >>> Ah, we never merged it?  Were we waiting on a userspace implementation
> >>> or agreement on the ABI?
> >>
> >> We were waiting on the userspace side to be respun against the latest
> >> API, and there were some comments from Peter (IIRC) about supporting
> >> PPIs in general (the other timers and the PMU, for example).
> >>
> >> None of that happened, as the most vocal proponent of the series
> >> apparently lost interest.
> >>
> >>> There was definitely a useful cleanup with the whole enabled flag thing
> >>> on the timer I remember.
> >>
> >> Indeed. We should at least try to resurrect that bit.
> >>
> > 
> > It's probably worth it trying to resurrect the whole thing I think,
> > especially since I think the implementation ended up looking quite nice.
> 
> Indeed. My only concern is about the userspace counterpart, which hasn't
> materialized when expected. Hopefully it will this time around!
> 
> > I can add a rebase of that to my list of never-ending timer rework.
> 
> We all know that you can do that while sleeping! ;-)
> 

Haha, maybe that will finally make the code right.

-Christoffer

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-01-29 15:21     ` Marc Zyngier
@ 2017-02-01  8:04       ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-02-01  8:04 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> > Now that we maintain the EL1 physical timer register states of VMs,
> > update the physical timer interrupt level along with the virtual one.
> >
> > Note that the emulated EL1 physical timer is not mapped to any hardware
> > timer, so we call a proper vgic function.
> >
> > Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> > ---
> >  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> >
> > diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> > index 0f6e935..3b6bd50 100644
> > --- a/virt/kvm/arm/arch_timer.c
> > +++ b/virt/kvm/arm/arch_timer.c
> > @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
> >  	WARN_ON(ret);
> >  }
> >  
> > +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
> > +				 struct arch_timer_context *timer)
> > +{
> > +	int ret;
> > +
> > +	BUG_ON(!vgic_initialized(vcpu->kvm));
> 
> Although I've added my fair share of BUG_ON() in the code base, I've
> since reconsidered my position. If we get in a situation where the vgic
> is not initialized, maybe it would be better to just WARN_ON and return
> early rather than killing the whole box. Thoughts?
> 

Could we help this series along by saying that since this BUG_ON already
exists in the kvm_timer_update_mapped_irq function, then it just
preserves functionality and it's up to someone else (me) to remove the
BUG_ON from both functions later in life?

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 127+ messages in thread
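
As background for what patch 06/10 computes as "the physical timer interrupt level": architecturally, a generic timer asserts its output line when it is enabled, its interrupt is not masked, and the counter has reached the compare value. A rough sketch of that condition using the kernel's CNT*_CTL bit definitions (the helper name is made up; this is not code from the series):

	#include <linux/types.h>
	#include <clocksource/arm_arch_timer.h>

	static bool timer_output_asserted(u64 cnt, u64 cval, u32 cnt_ctl)
	{
		if (!(cnt_ctl & ARCH_TIMER_CTRL_ENABLE))	/* CTL.ENABLE */
			return false;
		if (cnt_ctl & ARCH_TIMER_CTRL_IT_MASK)		/* CTL.IMASK */
			return false;
		return cnt >= cval;				/* CTL.ISTATUS */
	}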

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-02-01  8:04       ` Christoffer Dall
@ 2017-02-01  8:40         ` Jintack Lim
  -1 siblings, 0 replies; 127+ messages in thread
From: Jintack Lim @ 2017-02-01  8:40 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Marc Zyngier, Paolo Bonzini, Radim Krčmář,
	linux, Catalin Marinas, Will Deacon, Andre Przywara, KVM General,
	arm-mail-list, kvmarm, lkml - Kernel Mailing List

On Wed, Feb 1, 2017 at 3:04 AM, Christoffer Dall
<christoffer.dall@linaro.org> wrote:
> On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
>> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>> > Now that we maintain the EL1 physical timer register states of VMs,
>> > update the physical timer interrupt level along with the virtual one.
>> >
>> > Note that the emulated EL1 physical timer is not mapped to any hardware
>> > timer, so we call a proper vgic function.
>> >
>> > Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>> > ---
>> >  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
>> >  1 file changed, 20 insertions(+)
>> >
>> > diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>> > index 0f6e935..3b6bd50 100644
>> > --- a/virt/kvm/arm/arch_timer.c
>> > +++ b/virt/kvm/arm/arch_timer.c
>> > @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>> >     WARN_ON(ret);
>> >  }
>> >
>> > +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
>> > +                            struct arch_timer_context *timer)
>> > +{
>> > +   int ret;
>> > +
>> > +   BUG_ON(!vgic_initialized(vcpu->kvm));
>>
>> Although I've added my fair share of BUG_ON() in the code base, I've
>> since reconsidered my position. If we get in a situation where the vgic
>> is not initialized, maybe it would be better to just WARN_ON and return
>> early rather than killing the whole box. Thoughts?
>>
>
> Could we help this series along by saying that since this BUG_ON already
> exists in the kvm_timer_update_mapped_irq function, then it just
> preserves functionality and it's up to someone else (me) to remove the
> BUG_ON from both functions later in life?
>

Sounds good to me :) Thanks!

> Thanks,
> -Christoffer
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-02-01  8:04       ` Christoffer Dall
@ 2017-02-01 10:01         ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-02-01 10:01 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel

On 01/02/17 08:04, Christoffer Dall wrote:
> On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
>> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>> Now that we maintain the EL1 physical timer register states of VMs,
>>> update the physical timer interrupt level along with the virtual one.
>>>
>>> Note that the emulated EL1 physical timer is not mapped to any hardware
>>> timer, so we call a proper vgic function.
>>>
>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>> ---
>>>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
>>>  1 file changed, 20 insertions(+)
>>>
>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>> index 0f6e935..3b6bd50 100644
>>> --- a/virt/kvm/arm/arch_timer.c
>>> +++ b/virt/kvm/arm/arch_timer.c
>>> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>  	WARN_ON(ret);
>>>  }
>>>  
>>> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
>>> +				 struct arch_timer_context *timer)
>>> +{
>>> +	int ret;
>>> +
>>> +	BUG_ON(!vgic_initialized(vcpu->kvm));
>>
>> Although I've added my fair share of BUG_ON() in the code base, I've
>> since reconsidered my position. If we get in a situation where the vgic
>> is not initialized, maybe it would be better to just WARN_ON and return
>> early rather than killing the whole box. Thoughts?
>>
> 
> Could we help this series along by saying that since this BUG_ON already
> exists in the kvm_timer_update_mapped_irq function, then it just
> preserves functionality and it's up to someone else (me) to remove the
> BUG_ON from both functions later in life?

Works for me.

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-02-01  8:40         ` Jintack Lim
@ 2017-02-01 10:07           ` Christoffer Dall
  -1 siblings, 0 replies; 127+ messages in thread
From: Christoffer Dall @ 2017-02-01 10:07 UTC (permalink / raw)
  To: Jintack Lim
  Cc: Marc Zyngier, Paolo Bonzini, Radim Krčmář,
	linux, Catalin Marinas, Will Deacon, Andre Przywara, KVM General,
	arm-mail-list, kvmarm, lkml - Kernel Mailing List

On Wed, Feb 01, 2017 at 03:40:10AM -0500, Jintack Lim wrote:
> On Wed, Feb 1, 2017 at 3:04 AM, Christoffer Dall
> <christoffer.dall@linaro.org> wrote:
> > On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
> >> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
> >> > Now that we maintain the EL1 physical timer register states of VMs,
> >> > update the physical timer interrupt level along with the virtual one.
> >> >
> >> > Note that the emulated EL1 physical timer is not mapped to any hardware
> >> > timer, so we call a proper vgic function.
> >> >
> >> > Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> >> > ---
> >> >  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
> >> >  1 file changed, 20 insertions(+)
> >> >
> >> > diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> >> > index 0f6e935..3b6bd50 100644
> >> > --- a/virt/kvm/arm/arch_timer.c
> >> > +++ b/virt/kvm/arm/arch_timer.c
> >> > @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
> >> >     WARN_ON(ret);
> >> >  }
> >> >
> >> > +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
> >> > +                            struct arch_timer_context *timer)
> >> > +{
> >> > +   int ret;
> >> > +
> >> > +   BUG_ON(!vgic_initialized(vcpu->kvm));
> >>
> >> Although I've added my fair share of BUG_ON() in the code base, I've
> >> since reconsidered my position. If we get in a situation where the vgic
> >> is not initialized, maybe it would be better to just WARN_ON and return
> >> early rather than killing the whole box. Thoughts?
> >>
> >
> > Could we help this series along by saying that since this BUG_ON already
> > exists in the kvm_timer_update_mapped_irq function, then it just
> > preserves functionality and it's up to someone else (me) to remove the
> > BUG_ON from both functions later in life?
> >
> 
> Sounds good to me :) Thanks!
> 

So just as you thought you were getting off easy...

The reason we now have kvm_timer_update_irq and
kvm_timer_update_mapped_irq is that we have the following two vgic
functions:

kvm_vgic_inject_irq
kvm_vgic_inject_mapped_irq

But the only difference between the two is what they pass
as the mapped_irq argument to vgic_update_irq_pending.

What about if we just had this as a precursor patch:

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 6a084cd..91ecf48 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -175,7 +175,8 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
 	timer->irq.level = new_level;
 	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
 				   timer->irq.level);
-	ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
+
+	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
 					 timer->irq.irq,
 					 timer->irq.level);
 	WARN_ON(ret);
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index dea12df..4c87fd0 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -336,8 +336,7 @@ bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq)
 }
 
 static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
-				   unsigned int intid, bool level,
-				   bool mapped_irq)
+				   unsigned int intid, bool level)
 {
 	struct kvm_vcpu *vcpu;
 	struct vgic_irq *irq;
@@ -357,11 +356,6 @@ static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
 	if (!irq)
 		return -EINVAL;
 
-	if (irq->hw != mapped_irq) {
-		vgic_put_irq(kvm, irq);
-		return -EINVAL;
-	}
-
 	spin_lock(&irq->irq_lock);
 
 	if (!vgic_validate_injection(irq, level)) {
@@ -399,13 +393,7 @@ static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
 int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
 			bool level)
 {
-	return vgic_update_irq_pending(kvm, cpuid, intid, level, false);
-}
-
-int kvm_vgic_inject_mapped_irq(struct kvm *kvm, int cpuid, unsigned int intid,
-			       bool level)
-{
-	return vgic_update_irq_pending(kvm, cpuid, intid, level, true);
+	return vgic_update_irq_pending(kvm, cpuid, intid, level);
 }
 
 int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, u32 virt_irq, u32 phys_irq)


That would make this patch simpler.  If so, I can send out the above
patch with a proper description.

Thanks,
-Christoffer

^ permalink raw reply related	[flat|nested] 127+ messages in thread
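
Combined with the two-argument helper introduced in this patch, the consolidated call site would look roughly like the sketch below (based on the hunks above, not an actual follow-up patch):

	static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
					 struct arch_timer_context *timer)
	{
		int ret;

		timer->irq.level = new_level;
		trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
					   timer->irq.level);

		/*
		 * A single injection path now serves both the mapped virtual
		 * timer and the emulated EL1 physical timer.
		 */
		ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
					  timer->irq.irq, timer->irq.level);
		WARN_ON(ret);
	}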

* Re: [RFC v2 05/10] KVM: arm/arm64: Initialize the emulated EL1 physical timer
  2017-01-30 19:04           ` Christoffer Dall
@ 2017-02-01 10:08             ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-02-01 10:08 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Jintack Lim, pbonzini, rkrcmar, linux, catalin.marinas,
	will.deacon, andre.przywara, kvm, linux-arm-kernel, kvmarm,
	linux-kernel, Peter Maydell

On 30/01/17 19:04, Christoffer Dall wrote:
> On Mon, Jan 30, 2017 at 05:44:20PM +0000, Marc Zyngier wrote:

>> Eventually, we'll have to support an offset-able
>> physical counter to support nested virtualization, but this can come at
>> a later time.
>>
> Why do we need the offset-able physical counter for nested
> virtualization?  I would think for nested virt we just need to support
> respecting how the guest hypervisor programs CNTVOFF?

Ah, I see what you mean. Yes, once the guest hypervisor is in control of
its own CNTVOFF, we get everything we need. So let's just ignore this
for the time being, and we should be pretty good for this series.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread
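
For reference, the counter relationship behind the CNTVOFF discussion above is architectural rather than series-specific (the helper below is hypothetical and only restates it):

	#include <linux/types.h>

	/* CNTVCT_EL0 = CNTPCT_EL0 - CNTVOFF_EL2 */
	static u64 guest_virt_counter(u64 cntpct, u64 cntvoff)
	{
		return cntpct - cntvoff;
	}

In other words, once a nested guest hypervisor controls its own CNTVOFF, the virtual counter its guests observe follows directly from the physical counter, with no extra offset needed on the physical side.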

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
  2017-02-01 10:07           ` Christoffer Dall
@ 2017-02-01 10:17             ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-02-01 10:17 UTC (permalink / raw)
  To: Christoffer Dall, Jintack Lim
  Cc: Paolo Bonzini, Radim Krčmář,
	linux, Catalin Marinas, Will Deacon, Andre Przywara, KVM General,
	arm-mail-list, kvmarm, lkml - Kernel Mailing List

On 01/02/17 10:07, Christoffer Dall wrote:
> On Wed, Feb 01, 2017 at 03:40:10AM -0500, Jintack Lim wrote:
>> On Wed, Feb 1, 2017 at 3:04 AM, Christoffer Dall
>> <christoffer.dall@linaro.org> wrote:
>>> On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
>>>> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>>>> Now that we maintain the EL1 physical timer register states of VMs,
>>>>> update the physical timer interrupt level along with the virtual one.
>>>>>
>>>>> Note that the emulated EL1 physical timer is not mapped to any hardware
>>>>> timer, so we call a proper vgic function.
>>>>>
>>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>>>> ---
>>>>>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
>>>>>  1 file changed, 20 insertions(+)
>>>>>
>>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>>>> index 0f6e935..3b6bd50 100644
>>>>> --- a/virt/kvm/arm/arch_timer.c
>>>>> +++ b/virt/kvm/arm/arch_timer.c
>>>>> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>>>     WARN_ON(ret);
>>>>>  }
>>>>>
>>>>> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>>> +                            struct arch_timer_context *timer)
>>>>> +{
>>>>> +   int ret;
>>>>> +
>>>>> +   BUG_ON(!vgic_initialized(vcpu->kvm));
>>>>
>>>> Although I've added my fair share of BUG_ON() in the code base, I've
>>>> since reconsidered my position. If we get in a situation where the vgic
>>>> is not initialized, maybe it would be better to just WARN_ON and return
>>>> early rather than killing the whole box. Thoughts?
>>>>
>>>
>>> Could we help this series along by saying that since this BUG_ON already
>>> exists in the kvm_timer_update_mapped_irq function, then it just
>>> preserves functionality and it's up to someone else (me) to remove the
>>> BUG_ON from both functions later in life?
>>>
>>
>> Sounds good to me :) Thanks!
>>
> 
> So just as you thought you were getting off easy...
> 
> The reason we now have kvm_timer_update_irq and
> kvm_timer_update_mapped_irq is that we have the following two vgic
> functions:
> 
> kvm_vgic_inject_irq
> kvm_vgic_inject_mapped_irq
> 
> But the only difference between the two is what they pass
> as the mapped_irq argument to vgic_update_irq_pending.
> 
> What about if we just had this as a precursor patch:
> 
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index 6a084cd..91ecf48 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -175,7 +175,8 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
>  	timer->irq.level = new_level;
>  	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
>  				   timer->irq.level);
> -	ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
> +
> +	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>  					 timer->irq.irq,
>  					 timer->irq.level);
>  	WARN_ON(ret);
> diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> index dea12df..4c87fd0 100644
> --- a/virt/kvm/arm/vgic/vgic.c
> +++ b/virt/kvm/arm/vgic/vgic.c
> @@ -336,8 +336,7 @@ bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq)
>  }
>  
>  static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
> -				   unsigned int intid, bool level,
> -				   bool mapped_irq)
> +				   unsigned int intid, bool level)
>  {
>  	struct kvm_vcpu *vcpu;
>  	struct vgic_irq *irq;
> @@ -357,11 +356,6 @@ static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
>  	if (!irq)
>  		return -EINVAL;
>  
> -	if (irq->hw != mapped_irq) {
> -		vgic_put_irq(kvm, irq);
> -		return -EINVAL;
> -	}
> -
>  	spin_lock(&irq->irq_lock);
>  
>  	if (!vgic_validate_injection(irq, level)) {
> @@ -399,13 +393,7 @@ static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
>  int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
>  			bool level)
>  {
> -	return vgic_update_irq_pending(kvm, cpuid, intid, level, false);
> -}
> -
> -int kvm_vgic_inject_mapped_irq(struct kvm *kvm, int cpuid, unsigned int intid,
> -			       bool level)
> -{
> -	return vgic_update_irq_pending(kvm, cpuid, intid, level, true);
> +	return vgic_update_irq_pending(kvm, cpuid, intid, level);
>  }
>  
>  int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, u32 virt_irq, u32 phys_irq)
> 
> 
> That would make this patch simpler.  If so, I can send out the above
> patch with a proper description.

Indeed. And while you're at it, rename vgic_update_irq_pending to
kvm_vgic_inject_irq, as I don't think we need this simple stub?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level
@ 2017-02-01 10:17             ` Marc Zyngier
  0 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2017-02-01 10:17 UTC (permalink / raw)
  To: Christoffer Dall, Jintack Lim
  Cc: KVM General, Andre Przywara, Will Deacon, linux,
	lkml - Kernel Mailing List, Catalin Marinas, Paolo Bonzini,
	kvmarm, arm-mail-list

On 01/02/17 10:07, Christoffer Dall wrote:
> On Wed, Feb 01, 2017 at 03:40:10AM -0500, Jintack Lim wrote:
>> On Wed, Feb 1, 2017 at 3:04 AM, Christoffer Dall
>> <christoffer.dall@linaro.org> wrote:
>>> On Sun, Jan 29, 2017 at 03:21:06PM +0000, Marc Zyngier wrote:
>>>> On Fri, Jan 27 2017 at 01:04:56 AM, Jintack Lim <jintack@cs.columbia.edu> wrote:
>>>>> Now that we maintain the EL1 physical timer register states of VMs,
>>>>> update the physical timer interrupt level along with the virtual one.
>>>>>
>>>>> Note that the emulated EL1 physical timer is not mapped to any hardware
>>>>> timer, so we call a proper vgic function.
>>>>>
>>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>>>> ---
>>>>>  virt/kvm/arm/arch_timer.c | 20 ++++++++++++++++++++
>>>>>  1 file changed, 20 insertions(+)
>>>>>
>>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>>>> index 0f6e935..3b6bd50 100644
>>>>> --- a/virt/kvm/arm/arch_timer.c
>>>>> +++ b/virt/kvm/arm/arch_timer.c
>>>>> @@ -180,6 +180,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>>>     WARN_ON(ret);
>>>>>  }
>>>>>
>>>>> +static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
>>>>> +                            struct arch_timer_context *timer)
>>>>> +{
>>>>> +   int ret;
>>>>> +
>>>>> +   BUG_ON(!vgic_initialized(vcpu->kvm));
>>>>
>>>> Although I've added my fair share of BUG_ON() in the code base, I've
>>>> since reconsidered my position. If we get in a situation where the vgic
>>>> is not initialized, maybe it would be better to just WARN_ON and return
>>>> early rather than killing the whole box. Thoughts?
>>>>
>>>
>>> Could we help this series along by saying that since this BUG_ON already
>>> exists in the kvm_timer_update_mapped_irq function, then it just
>>> preserves functionality and it's up to someone else (me) to remove the
>>> BUG_ON from both functions later in life?
>>>
>>
>> Sounds good to me :) Thanks!
>>
> 
> So just as you thought you were getting off easy...
> 
> The reason we now have kvm_timer_update_irq and
> kvm_timer_update_mapped_irq is that we have the following two vgic
> functions:
> 
> kvm_vgic_inject_irq
> kvm_vgic_inject_mapped_irq
> 
> But the only difference between the two is what they pass
> as the mapped_irq argument to vgic_update_irq_pending.
> 
> What about if we just had this as a precursor patch:
> 
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index 6a084cd..91ecf48 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -175,7 +175,8 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
>  	timer->irq.level = new_level;
>  	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
>  				   timer->irq.level);
> -	ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
> +
> +	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>  					 timer->irq.irq,
>  					 timer->irq.level);
>  	WARN_ON(ret);
> diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> index dea12df..4c87fd0 100644
> --- a/virt/kvm/arm/vgic/vgic.c
> +++ b/virt/kvm/arm/vgic/vgic.c
> @@ -336,8 +336,7 @@ bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq)
>  }
>  
>  static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
> -				   unsigned int intid, bool level,
> -				   bool mapped_irq)
> +				   unsigned int intid, bool level)
>  {
>  	struct kvm_vcpu *vcpu;
>  	struct vgic_irq *irq;
> @@ -357,11 +356,6 @@ static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
>  	if (!irq)
>  		return -EINVAL;
>  
> -	if (irq->hw != mapped_irq) {
> -		vgic_put_irq(kvm, irq);
> -		return -EINVAL;
> -	}
> -
>  	spin_lock(&irq->irq_lock);
>  
>  	if (!vgic_validate_injection(irq, level)) {
> @@ -399,13 +393,7 @@ static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
>  int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
>  			bool level)
>  {
> -	return vgic_update_irq_pending(kvm, cpuid, intid, level, false);
> -}
> -
> -int kvm_vgic_inject_mapped_irq(struct kvm *kvm, int cpuid, unsigned int intid,
> -			       bool level)
> -{
> -	return vgic_update_irq_pending(kvm, cpuid, intid, level, true);
> +	return vgic_update_irq_pending(kvm, cpuid, intid, level);
>  }
>  
>  int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, u32 virt_irq, u32 phys_irq)
> 
> 
> That would make this patch simpler.  If that works for you, I can send
> out the above patch with a proper description.

Indeed. And while you're at it, could you rename vgic_update_irq_pending to
kvm_vgic_inject_irq? I don't think we need this simple stub.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread
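
A minimal sketch of the WARN_ON-and-return-early variant Marc suggests above.
The exact shape is this note's assumption rather than code from the posted
series, and it presumes the merged kvm_vgic_inject_irq() from the precursor
patch:

static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
				 struct arch_timer_context *timer_ctx)
{
	int ret;

	/*
	 * Hypothetical replacement for the BUG_ON(): warn and bail out
	 * early so a missing vgic does not bring down the whole host.
	 */
	if (WARN_ON(!vgic_initialized(vcpu->kvm)))
		return;

	timer_ctx->irq.level = new_level;
	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
				   timer_ctx->irq.level);

	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
				  timer_ctx->irq.irq,
				  timer_ctx->irq.level);
	WARN_ON(ret);
}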

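For context, a rough sketch of how a single update helper could then drive
both timer contexts once kvm_vgic_inject_irq() no longer distinguishes mapped
interrupts; the vcpu_vtimer()/vcpu_ptimer() accessors, kvm_timer_should_fire()
taking a timer context, and the enabled flag are assumed from earlier patches
in the series:

static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
{
	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

	if (unlikely(!timer->enabled))
		return;

	/*
	 * One injection path serves both the virtual timer and the
	 * emulated EL1 physical timer, flipping the line level whenever
	 * the emulated timer state says it should change.
	 */
	if (kvm_timer_should_fire(vtimer) != vtimer->irq.level)
		kvm_timer_update_irq(vcpu, !vtimer->irq.level, vtimer);

	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
}
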
end of thread, other threads:[~2017-02-01 10:17 UTC | newest]

Thread overview: 127+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-27  1:04 [RFC v2 00/10] Provide the EL1 physical timer to the VM Jintack Lim
2017-01-27  1:04 ` Jintack Lim
2017-01-27  1:04 ` [RFC v2 01/10] KVM: arm/arm64: Abstract virtual timer context into separate structure Jintack Lim
2017-01-27  1:04   ` Jintack Lim
2017-01-29 11:44   ` Marc Zyngier
2017-01-29 11:44     ` Marc Zyngier
2017-01-29 11:44     ` Marc Zyngier
2017-01-27  1:04 ` [RFC v2 02/10] KVM: arm/arm64: Move cntvoff to each timer context Jintack Lim
2017-01-27  1:04   ` Jintack Lim
2017-01-29 11:54   ` Marc Zyngier
2017-01-29 11:54     ` Marc Zyngier
2017-01-29 11:54     ` Marc Zyngier
2017-01-30 14:45     ` Christoffer Dall
2017-01-30 14:45       ` Christoffer Dall
2017-01-30 14:45       ` Christoffer Dall
2017-01-30 14:51       ` Marc Zyngier
2017-01-30 14:51         ` Marc Zyngier
2017-01-30 14:51         ` Marc Zyngier
2017-01-30 17:40         ` Jintack Lim
2017-01-30 17:40           ` Jintack Lim
2017-01-30 17:40           ` Jintack Lim
2017-01-30 17:58     ` Jintack Lim
2017-01-30 17:58       ` Jintack Lim
2017-01-30 17:58       ` Jintack Lim
2017-01-30 18:05       ` Marc Zyngier
2017-01-30 18:05         ` Marc Zyngier
2017-01-30 18:05         ` Marc Zyngier
2017-01-30 18:45         ` Jintack Lim
2017-01-30 18:45           ` Jintack Lim
2017-01-27  1:04 ` [RFC v2 03/10] KVM: arm/arm64: Decouple kvm timer functions from virtual timer Jintack Lim
2017-01-27  1:04   ` Jintack Lim
2017-01-29 12:01   ` Marc Zyngier
2017-01-29 12:01     ` Marc Zyngier
2017-01-29 12:01     ` Marc Zyngier
2017-01-30 17:17     ` Jintack Lim
2017-01-30 17:17       ` Jintack Lim
2017-01-30 14:49   ` Christoffer Dall
2017-01-30 14:49     ` Christoffer Dall
2017-01-30 14:49     ` Christoffer Dall
2017-01-30 17:18     ` Jintack Lim
2017-01-30 17:18       ` Jintack Lim
2017-01-30 17:18       ` Jintack Lim
2017-01-27  1:04 ` [RFC v2 04/10] KVM: arm/arm64: Add the EL1 physical timer context Jintack Lim
2017-01-27  1:04   ` Jintack Lim
2017-01-27  1:04 ` [RFC v2 05/10] KVM: arm/arm64: Initialize the emulated EL1 physical timer Jintack Lim
2017-01-27  1:04   ` Jintack Lim
2017-01-29 12:07   ` Marc Zyngier
2017-01-29 12:07     ` Marc Zyngier
2017-01-29 12:07     ` Marc Zyngier
2017-01-30 14:58     ` Christoffer Dall
2017-01-30 14:58       ` Christoffer Dall
2017-01-30 14:58       ` Christoffer Dall
2017-01-30 17:44       ` Marc Zyngier
2017-01-30 17:44         ` Marc Zyngier
2017-01-30 19:04         ` Christoffer Dall
2017-01-30 19:04           ` Christoffer Dall
2017-01-30 19:04           ` Christoffer Dall
2017-02-01 10:08           ` Marc Zyngier
2017-02-01 10:08             ` Marc Zyngier
2017-02-01 10:08             ` Marc Zyngier
2017-01-27  1:04 ` [RFC v2 06/10] KVM: arm/arm64: Update the physical timer interrupt level Jintack Lim
2017-01-27  1:04   ` Jintack Lim
2017-01-29 15:21   ` Marc Zyngier
2017-01-29 15:21     ` Marc Zyngier
2017-01-29 15:21     ` Marc Zyngier
2017-01-30 15:02     ` Christoffer Dall
2017-01-30 15:02       ` Christoffer Dall
2017-01-30 17:50       ` Marc Zyngier
2017-01-30 17:50         ` Marc Zyngier
2017-01-30 17:50         ` Marc Zyngier
2017-01-30 18:41         ` Christoffer Dall
2017-01-30 18:41           ` Christoffer Dall
2017-01-30 18:48           ` Marc Zyngier
2017-01-30 18:48             ` Marc Zyngier
2017-01-30 18:48             ` Marc Zyngier
2017-01-30 19:06             ` Christoffer Dall
2017-01-30 19:06               ` Christoffer Dall
2017-01-30 19:06               ` Christoffer Dall
2017-01-31 17:00               ` Marc Zyngier
2017-01-31 17:00                 ` Marc Zyngier
2017-01-31 17:00                 ` Marc Zyngier
2017-02-01  8:02                 ` Christoffer Dall
2017-02-01  8:02                   ` Christoffer Dall
2017-02-01  8:02                   ` Christoffer Dall
2017-02-01  8:04     ` Christoffer Dall
2017-02-01  8:04       ` Christoffer Dall
2017-02-01  8:04       ` Christoffer Dall
2017-02-01  8:40       ` Jintack Lim
2017-02-01  8:40         ` Jintack Lim
2017-02-01  8:40         ` Jintack Lim
2017-02-01 10:07         ` Christoffer Dall
2017-02-01 10:07           ` Christoffer Dall
2017-02-01 10:07           ` Christoffer Dall
2017-02-01 10:17           ` Marc Zyngier
2017-02-01 10:17             ` Marc Zyngier
2017-02-01 10:17             ` Marc Zyngier
2017-02-01 10:01       ` Marc Zyngier
2017-02-01 10:01         ` Marc Zyngier
2017-01-27  1:04 ` [RFC v2 07/10] KVM: arm/arm64: Set a background timer to the earliest timer expiration Jintack Lim
2017-01-27  1:04   ` Jintack Lim
2017-01-27  1:04 ` [RFC v2 08/10] KVM: arm/arm64: Set up a background timer for the physical timer emulation Jintack Lim
2017-01-27  1:04   ` Jintack Lim
2017-01-27  1:04 ` [RFC v2 09/10] KVM: arm64: Add the EL1 physical timer access handler Jintack Lim
2017-01-27  1:04   ` Jintack Lim
2017-01-27  1:05 ` [RFC v2 10/10] KVM: arm/arm64: Emulate the EL1 phys timer register access Jintack Lim
2017-01-27  1:05   ` Jintack Lim
2017-01-29 15:44   ` Marc Zyngier
2017-01-29 15:44     ` Marc Zyngier
2017-01-29 15:44     ` Marc Zyngier
2017-01-30 17:08     ` Jintack Lim
2017-01-30 17:08       ` Jintack Lim
2017-01-30 17:08       ` Jintack Lim
2017-01-30 17:26       ` Peter Maydell
2017-01-30 17:26         ` Peter Maydell
2017-01-30 17:26         ` Peter Maydell
2017-01-30 17:35         ` Marc Zyngier
2017-01-30 17:35           ` Marc Zyngier
2017-01-30 17:35           ` Marc Zyngier
2017-01-30 17:38         ` Jintack Lim
2017-01-30 17:38           ` Jintack Lim
2017-01-30 17:38           ` Jintack Lim
2017-01-29 15:55 ` [RFC v2 00/10] Provide the EL1 physical timer to the VM Marc Zyngier
2017-01-29 15:55   ` Marc Zyngier
2017-01-29 15:55   ` Marc Zyngier
2017-01-30 19:02   ` Jintack Lim
2017-01-30 19:02     ` Jintack Lim
2017-01-30 19:02     ` Jintack Lim
