* [PATCH v2 0/4] kvm/ppc: interrupt disabling fixes
From: Scott Wood @ 2013-05-10 3:09 UTC (permalink / raw)
To: Alexander Graf, Benjamin Herrenschmidt; +Cc: kvm-ppc, kvm, linuxppc-dev
v2:
- Split into separate changes
- Rebase on top of (and fix a bug in) "powerpc: Make hard_irq_disable()
  do the right thing vs. irq tracing"
- Move interrupt disabling/enabling into kvmppc_handle_exit()
Based on Ben's next branch.
Scott Wood (4):
powerpc: hard_irq_disable(): Call trace_hardirqs_off after disabling
kvm/ppc/booke64: Fix lazy ee handling in kvmppc_handle_exit()
kvm/ppc: Call trace_hardirqs_on before entry
kvm/ppc: IRQ disabling cleanup
arch/powerpc/include/asm/hw_irq.h | 5 +++--
arch/powerpc/include/asm/kvm_ppc.h | 17 ++++++++++++++---
arch/powerpc/kvm/book3s_pr.c | 16 +++++-----------
arch/powerpc/kvm/booke.c | 18 +++++++++---------
arch/powerpc/kvm/powerpc.c | 23 ++++++++---------------
5 files changed, 39 insertions(+), 40 deletions(-)
--
1.7.10.4
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v2 1/4] powerpc: hard_irq_disable(): Call trace_hardirqs_off after disabling
From: Scott Wood @ 2013-05-10 3:09 UTC (permalink / raw)
To: Alexander Graf, Benjamin Herrenschmidt
Cc: kvm-ppc, kvm, linuxppc-dev, Scott Wood
lockdep.c has this:
/*
* So we're supposed to get called after you mask local IRQs,
* but for some reason the hardware doesn't quite think you did
* a proper job.
*/
if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
return;
Since irqs_disabled() is based on soft_enabled(), that (not just the
hard EE bit) needs to be 0 before we call trace_hardirqs_off.
Signed-off-by: Scott Wood <scottwood@freescale.com>
---
arch/powerpc/include/asm/hw_irq.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index d615b28..ba713f1 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -96,11 +96,12 @@ static inline bool arch_irqs_disabled(void)
#endif
#define hard_irq_disable() do { \
+ u8 _was_enabled = get_paca()->soft_enabled; \
__hard_irq_disable(); \
- if (local_paca->soft_enabled) \
- trace_hardirqs_off(); \
get_paca()->soft_enabled = 0; \
get_paca()->irq_happened |= PACA_IRQ_HARD_DIS; \
+ if (_was_enabled) \
+ trace_hardirqs_off(); \
} while(0)
static inline bool lazy_irq_pending(void)
--
1.7.10.4
* [PATCH v2 2/4] kvm/ppc/booke64: Fix lazy ee handling in kvmppc_handle_exit()
From: Scott Wood @ 2013-05-10 3:09 UTC (permalink / raw)
To: Alexander Graf, Benjamin Herrenschmidt
Cc: kvm-ppc, kvm, linuxppc-dev, Scott Wood
EE is hard-disabled on entry to kvmppc_handle_exit(), so call
hard_irq_disable() to set PACA_IRQ_HARD_DIS and clear soft_enabled,
making the lazy-EE bookkeeping match the hardware state.
Without this, we get warnings such as the one at
arch/powerpc/kernel/time.c:300, and the host kernel sometimes hangs.
Signed-off-by: Scott Wood <scottwood@freescale.com>
---
arch/powerpc/kvm/booke.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 1020119..705fc5c 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -833,6 +833,11 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
int r = RESUME_HOST;
int s;
+#ifdef CONFIG_PPC64
+ WARN_ON(local_paca->irq_happened != 0);
+#endif
+ hard_irq_disable();
+
/* update before a new last_exit_type is rewritten */
kvmppc_update_timing_stats(vcpu);
--
1.7.10.4
* [PATCH v2 3/4] kvm/ppc: Call trace_hardirqs_on before entry
From: Scott Wood @ 2013-05-10 3:09 UTC (permalink / raw)
To: Alexander Graf, Benjamin Herrenschmidt
Cc: kvm-ppc, kvm, linuxppc-dev, Scott Wood
Currently this is only being done on 64-bit. Rather than just move it
out of the 64-bit ifdef, move it to kvmppc_lazy_ee_enable() so that it
is consistent with lazy ee state, and so that we don't track more host
code as interrupts-enabled than necessary.
Rename kvmppc_lazy_ee_enable() to kvmppc_fix_ee_before_entry() to
reflect that this function now has a role on 32-bit as well.
Signed-off-by: Scott Wood <scottwood@freescale.com>
---
arch/powerpc/include/asm/kvm_ppc.h | 11 ++++++++---
arch/powerpc/kvm/book3s_pr.c | 4 ++--
arch/powerpc/kvm/booke.c | 4 ++--
arch/powerpc/kvm/powerpc.c | 2 --
4 files changed, 12 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index a5287fe..6885846 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -394,10 +394,15 @@ static inline void kvmppc_mmu_flush_icache(pfn_t pfn)
}
}
-/* Please call after prepare_to_enter. This function puts the lazy ee state
- back to normal mode, without actually enabling interrupts. */
-static inline void kvmppc_lazy_ee_enable(void)
+/*
+ * Please call after prepare_to_enter. This function puts the lazy ee and irq
+ * disabled tracking state back to normal mode, without actually enabling
+ * interrupts.
+ */
+static inline void kvmppc_fix_ee_before_entry(void)
{
+ trace_hardirqs_on();
+
#ifdef CONFIG_PPC64
/* Only need to enable IRQs by hard enabling them after this */
local_paca->irq_happened = 0;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index bdc40b8..0b97ce4 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -890,7 +890,7 @@ program_interrupt:
local_irq_enable();
r = s;
} else {
- kvmppc_lazy_ee_enable();
+ kvmppc_fix_ee_before_entry();
}
}
@@ -1161,7 +1161,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
if (vcpu->arch.shared->msr & MSR_FP)
kvmppc_handle_ext(vcpu, BOOK3S_INTERRUPT_FP_UNAVAIL, MSR_FP);
- kvmppc_lazy_ee_enable();
+ kvmppc_fix_ee_before_entry();
ret = __kvmppc_vcpu_run(kvm_run, vcpu);
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 705fc5c..eb89b83 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -673,7 +673,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
ret = s;
goto out;
}
- kvmppc_lazy_ee_enable();
+ kvmppc_fix_ee_before_entry();
kvm_guest_enter();
@@ -1154,7 +1154,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
local_irq_enable();
r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
} else {
- kvmppc_lazy_ee_enable();
+ kvmppc_fix_ee_before_entry();
}
}
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 6316ee3..4e05f8c 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -117,8 +117,6 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
kvm_guest_exit();
continue;
}
-
- trace_hardirqs_on();
#endif
kvm_guest_enter();
--
1.7.10.4
* [PATCH v2 4/4] kvm/ppc: IRQ disabling cleanup
From: Scott Wood @ 2013-05-10 3:09 UTC (permalink / raw)
To: Alexander Graf, Benjamin Herrenschmidt
Cc: kvm-ppc, kvm, linuxppc-dev, Scott Wood
Simplify the handling of lazy EE by going directly from fully-enabled
to hard-disabled. This replaces the lazy_irq_pending() check
(including its misplaced kvm_guest_exit() call).
As suggested by Tiejun Chen, move the interrupt disabling into
kvmppc_prepare_to_enter() rather than have each caller do it. Also
move the IRQ enabling on heavyweight exit into
kvmppc_prepare_to_enter().
Don't move kvmppc_fix_ee_before_entry() into kvmppc_prepare_to_enter(),
so that the caller can avoid marking interrupts enabled earlier than
necessary (e.g. book3s_pr waits until after FP save/restore is done).
Signed-off-by: Scott Wood <scottwood@freescale.com>
---
arch/powerpc/include/asm/kvm_ppc.h | 6 ++++++
arch/powerpc/kvm/book3s_pr.c | 12 +++---------
arch/powerpc/kvm/booke.c | 9 ++-------
arch/powerpc/kvm/powerpc.c | 21 ++++++++-------------
4 files changed, 19 insertions(+), 29 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 6885846..e4474f8 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -404,6 +404,12 @@ static inline void kvmppc_fix_ee_before_entry(void)
trace_hardirqs_on();
#ifdef CONFIG_PPC64
+ /*
+ * To avoid races, the caller must have gone directly from having
+ * interrupts fully-enabled to hard-disabled.
+ */
+ WARN_ON(local_paca->irq_happened != PACA_IRQ_HARD_DIS);
+
/* Only need to enable IRQs by hard enabling them after this */
local_paca->irq_happened = 0;
local_paca->soft_enabled = 1;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 0b97ce4..e61e39e 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -884,14 +884,11 @@ program_interrupt:
* and if we really did time things so badly, then we just exit
* again due to a host external interrupt.
*/
- local_irq_disable();
s = kvmppc_prepare_to_enter(vcpu);
- if (s <= 0) {
- local_irq_enable();
+ if (s <= 0)
r = s;
- } else {
+ else
kvmppc_fix_ee_before_entry();
- }
}
trace_kvm_book3s_reenter(r, vcpu);
@@ -1121,12 +1118,9 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
* really did time things so badly, then we just exit again due to
* a host external interrupt.
*/
- local_irq_disable();
ret = kvmppc_prepare_to_enter(vcpu);
- if (ret <= 0) {
- local_irq_enable();
+ if (ret <= 0)
goto out;
- }
/* Save FPU state in stack */
if (current->thread.regs->msr & MSR_FP)
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index eb89b83..f7c0111 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -666,10 +666,8 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
return -EINVAL;
}
- local_irq_disable();
s = kvmppc_prepare_to_enter(vcpu);
if (s <= 0) {
- local_irq_enable();
ret = s;
goto out;
}
@@ -1148,14 +1146,11 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
* aren't already exiting to userspace for some other reason.
*/
if (!(r & RESUME_HOST)) {
- local_irq_disable();
s = kvmppc_prepare_to_enter(vcpu);
- if (s <= 0) {
- local_irq_enable();
+ if (s <= 0)
r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
- } else {
+ else
kvmppc_fix_ee_before_entry();
- }
}
return r;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 4e05f8c..f8659aa 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -64,12 +64,14 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
{
int r = 1;
- WARN_ON_ONCE(!irqs_disabled());
+ WARN_ON(irqs_disabled());
+ hard_irq_disable();
+
while (true) {
if (need_resched()) {
local_irq_enable();
cond_resched();
- local_irq_disable();
+ hard_irq_disable();
continue;
}
@@ -95,7 +97,7 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
local_irq_enable();
trace_kvm_check_requests(vcpu);
r = kvmppc_core_check_requests(vcpu);
- local_irq_disable();
+ hard_irq_disable();
if (r > 0)
continue;
break;
@@ -108,21 +110,14 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
}
#ifdef CONFIG_PPC64
- /* lazy EE magic */
- hard_irq_disable();
- if (lazy_irq_pending()) {
- /* Got an interrupt in between, try again */
- local_irq_enable();
- local_irq_disable();
- kvm_guest_exit();
- continue;
- }
+ WARN_ON(lazy_irq_pending());
#endif
kvm_guest_enter();
- break;
+ return r;
}
+ local_irq_enable();
return r;
}
#endif /* CONFIG_KVM_BOOK3S_64_HV */
--
1.7.10.4
* RE: [PATCH v2 3/4] kvm/ppc: Call trace_hardirqs_on before entry
2013-05-10 3:09 ` Scott Wood
@ 2013-05-10 3:34 ` Bhushan Bharat-R65777
-1 siblings, 0 replies; 38+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-05-10 3:34 UTC (permalink / raw)
To: Wood Scott-B07421, Alexander Graf, Benjamin Herrenschmidt
Cc: kvm-ppc, kvm, linuxppc-dev, Wood Scott-B07421
> -----Original Message-----
> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org] On Behalf Of
> Scott Wood
> Sent: Friday, May 10, 2013 8:40 AM
> To: Alexander Graf; Benjamin Herrenschmidt
> Cc: kvm-ppc@vger.kernel.org; kvm@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
> Wood Scott-B07421
> Subject: [PATCH v2 3/4] kvm/ppc: Call trace_hardirqs_on before entry
>
> Currently this is only being done on 64-bit. Rather than just move it
> out of the 64-bit ifdef, move it to kvm_lazy_ee_enable() so that it is
> consistent with lazy ee state, and so that we don't track more host
> code as interrupts-enabled than necessary.
>
> Rename kvm_lazy_ee_enable() to kvm_fix_ee_before_entry() to reflect
> that this function now has a role on 32-bit as well.
>
> Signed-off-by: Scott Wood <scottwood@freescale.com>
> ---
> arch/powerpc/include/asm/kvm_ppc.h | 11 ++++++++---
> arch/powerpc/kvm/book3s_pr.c | 4 ++--
> arch/powerpc/kvm/booke.c | 4 ++--
> arch/powerpc/kvm/powerpc.c | 2 --
> 4 files changed, 12 insertions(+), 9 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h
> b/arch/powerpc/include/asm/kvm_ppc.h
> index a5287fe..6885846 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -394,10 +394,15 @@ static inline void kvmppc_mmu_flush_icache(pfn_t pfn)
> }
> }
>
> -/* Please call after prepare_to_enter. This function puts the lazy ee state
> - back to normal mode, without actually enabling interrupts. */
> -static inline void kvmppc_lazy_ee_enable(void)
> +/*
> + * Please call after prepare_to_enter. This function puts the lazy ee and irq
> + * disabled tracking state back to normal mode, without actually enabling
> + * interrupts.
> + */
> +static inline void kvmppc_fix_ee_before_entry(void)
> {
> + trace_hardirqs_on();
> +
> #ifdef CONFIG_PPC64
> /* Only need to enable IRQs by hard enabling them after this */
> local_paca->irq_happened = 0;
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index bdc40b8..0b97ce4 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -890,7 +890,7 @@ program_interrupt:
> local_irq_enable();
> r = s;
> } else {
> - kvmppc_lazy_ee_enable();
> + kvmppc_fix_ee_before_entry();
> }
> }
>
> @@ -1161,7 +1161,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct
> kvm_vcpu *vcpu)
> if (vcpu->arch.shared->msr & MSR_FP)
> kvmppc_handle_ext(vcpu, BOOK3S_INTERRUPT_FP_UNAVAIL, MSR_FP);
>
> - kvmppc_lazy_ee_enable();
> + kvmppc_fix_ee_before_entry();
>
> ret = __kvmppc_vcpu_run(kvm_run, vcpu);
>
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index 705fc5c..eb89b83 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -673,7 +673,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu
> *vcpu)
> ret = s;
> goto out;
> }
> - kvmppc_lazy_ee_enable();
> + kvmppc_fix_ee_before_entry();
local_irq_disable() is called before kvmppc_prepare_to_enter().
Now we put irq_happened and soft_enabled back to their previous state without checking whether any interrupt happened in between. If an interrupt happens in between, won't it be lost?
-Bharat
>
> kvm_guest_enter();
>
> @@ -1154,7 +1154,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct
> kvm_vcpu *vcpu,
> local_irq_enable();
> r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
> } else {
> - kvmppc_lazy_ee_enable();
> + kvmppc_fix_ee_before_entry();
> }
> }
>
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 6316ee3..4e05f8c 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -117,8 +117,6 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
> kvm_guest_exit();
> continue;
> }
> -
> - trace_hardirqs_on();
> #endif
>
> kvm_guest_enter();
> --
> 1.7.10.4
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v2 3/4] kvm/ppc: Call trace_hardirqs_on before entry
2013-05-10 3:34 ` Bhushan Bharat-R65777
@ 2013-05-10 4:40 ` tiejun.chen
-1 siblings, 0 replies; 38+ messages in thread
From: tiejun.chen @ 2013-05-10 4:40 UTC (permalink / raw)
To: Bhushan Bharat-R65777
Cc: Wood Scott-B07421, Alexander Graf, Benjamin Herrenschmidt,
linuxppc-dev, kvm, kvm-ppc
On 05/10/2013 11:34 AM, Bhushan Bharat-R65777 wrote:
>
>
>> -----Original Message-----
>> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org] On Behalf Of
>> Scott Wood
>> Sent: Friday, May 10, 2013 8:40 AM
>> To: Alexander Graf; Benjamin Herrenschmidt
>> Cc: kvm-ppc@vger.kernel.org; kvm@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
>> Wood Scott-B07421
>> Subject: [PATCH v2 3/4] kvm/ppc: Call trace_hardirqs_on before entry
>>
>> Currently this is only being done on 64-bit. Rather than just move it
>> out of the 64-bit ifdef, move it to kvm_lazy_ee_enable() so that it is
>> consistent with lazy ee state, and so that we don't track more host
>> code as interrupts-enabled than necessary.
>>
>> Rename kvm_lazy_ee_enable() to kvm_fix_ee_before_entry() to reflect
>> that this function now has a role on 32-bit as well.
>>
>> Signed-off-by: Scott Wood <scottwood@freescale.com>
>> ---
>> arch/powerpc/include/asm/kvm_ppc.h | 11 ++++++++---
>> arch/powerpc/kvm/book3s_pr.c | 4 ++--
>> arch/powerpc/kvm/booke.c | 4 ++--
>> arch/powerpc/kvm/powerpc.c | 2 --
>> 4 files changed, 12 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/kvm_ppc.h
>> b/arch/powerpc/include/asm/kvm_ppc.h
>> index a5287fe..6885846 100644
>> --- a/arch/powerpc/include/asm/kvm_ppc.h
>> +++ b/arch/powerpc/include/asm/kvm_ppc.h
>> @@ -394,10 +394,15 @@ static inline void kvmppc_mmu_flush_icache(pfn_t pfn)
>> }
>> }
>>
>> -/* Please call after prepare_to_enter. This function puts the lazy ee state
>> - back to normal mode, without actually enabling interrupts. */
>> -static inline void kvmppc_lazy_ee_enable(void)
>> +/*
>> + * Please call after prepare_to_enter. This function puts the lazy ee and irq
>> + * disabled tracking state back to normal mode, without actually enabling
>> + * interrupts.
>> + */
>> +static inline void kvmppc_fix_ee_before_entry(void)
>> {
>> + trace_hardirqs_on();
>> +
>> #ifdef CONFIG_PPC64
>> /* Only need to enable IRQs by hard enabling them after this */
>> local_paca->irq_happened = 0;
>> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
>> index bdc40b8..0b97ce4 100644
>> --- a/arch/powerpc/kvm/book3s_pr.c
>> +++ b/arch/powerpc/kvm/book3s_pr.c
>> @@ -890,7 +890,7 @@ program_interrupt:
>> local_irq_enable();
>> r = s;
>> } else {
>> - kvmppc_lazy_ee_enable();
>> + kvmppc_fix_ee_before_entry();
>> }
>> }
>>
>> @@ -1161,7 +1161,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct
>> kvm_vcpu *vcpu)
>> if (vcpu->arch.shared->msr & MSR_FP)
>> kvmppc_handle_ext(vcpu, BOOK3S_INTERRUPT_FP_UNAVAIL, MSR_FP);
>>
>> - kvmppc_lazy_ee_enable();
>> + kvmppc_fix_ee_before_entry();
>>
>> ret = __kvmppc_vcpu_run(kvm_run, vcpu);
>>
>> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
>> index 705fc5c..eb89b83 100644
>> --- a/arch/powerpc/kvm/booke.c
>> +++ b/arch/powerpc/kvm/booke.c
>> @@ -673,7 +673,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu
>> *vcpu)
>> ret = s;
>> goto out;
>> }
>> - kvmppc_lazy_ee_enable();
>> + kvmppc_fix_ee_before_entry();
>
> local_irq_disable() is called before kvmppc_prepare_to_enter().
In patch 4, we call hard_irq_disable() as soon as we enter kvmppc_prepare_to_enter().
Tiejun
> Now we put irq_happened and soft_enabled back to their previous state without checking whether any interrupt happened in between. If an interrupt happens in between, won't it be lost?
>
> -Bharat
>
>>
>> kvm_guest_enter();
>>
>> @@ -1154,7 +1154,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct
>> kvm_vcpu *vcpu,
>> local_irq_enable();
>> r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
>> } else {
>> - kvmppc_lazy_ee_enable();
>> + kvmppc_fix_ee_before_entry();
>> }
>> }
>>
>> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
>> index 6316ee3..4e05f8c 100644
>> --- a/arch/powerpc/kvm/powerpc.c
>> +++ b/arch/powerpc/kvm/powerpc.c
>> @@ -117,8 +117,6 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
>> kvm_guest_exit();
>> continue;
>> }
>> -
>> - trace_hardirqs_on();
>> #endif
>>
>> kvm_guest_enter();
>> --
>> 1.7.10.4
>>
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe kvm" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
>
> _______________________________________________
> Linuxppc-dev mailing list
> Linuxppc-dev@lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/linuxppc-dev
>
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: [PATCH v2 2/4] kvm/ppc/booke64: Fix lazy ee handling in kvmppc_handle_exit()
2013-05-10 3:09 ` Scott Wood
@ 2013-05-10 5:01 ` Bhushan Bharat-R65777
-1 siblings, 0 replies; 38+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-05-10 5:01 UTC (permalink / raw)
To: Wood Scott-B07421, Alexander Graf, Benjamin Herrenschmidt
Cc: Wood Scott-B07421, linuxppc-dev, kvm, kvm-ppc
> -----Original Message-----
> From: kvm-ppc-owner@vger.kernel.org [mailto:kvm-ppc-owner@vger.kernel.org] On
> Behalf Of Scott Wood
> Sent: Friday, May 10, 2013 8:40 AM
> To: Alexander Graf; Benjamin Herrenschmidt
> Cc: kvm-ppc@vger.kernel.org; kvm@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
> Wood Scott-B07421
> Subject: [PATCH v2 2/4] kvm/ppc/booke64: Fix lazy ee handling in
> kvmppc_handle_exit()
>
> EE is hard-disabled on entry to kvmppc_handle_exit(), so call
> hard_irq_disable() so that PACA_IRQ_HARD_DIS is set, and soft_enabled
> is unset.
>
> Without this, we get warnings such as arch/powerpc/kernel/time.c:300,
> and sometimes host kernel hangs.
>
> Signed-off-by: Scott Wood <scottwood@freescale.com>
> ---
> arch/powerpc/kvm/booke.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index 1020119..705fc5c 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -833,6 +833,11 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu
> *vcpu,
> int r = RESUME_HOST;
> int s;
>
> +#ifdef CONFIG_PPC64
> + WARN_ON(local_paca->irq_happened != 0);
> +#endif
> + hard_irq_disable();
It is not actually to hard-disable, since EE is already clear, but to make it look like a hard disable to the host. Right?
If so, should we write a comment here on why we are doing this?
-Bharat
> +
> /* update before a new last_exit_type is rewritten */
> kvmppc_update_timing_stats(vcpu);
>
> --
> 1.7.10.4
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: [PATCH v2 4/4] kvm/ppc: IRQ disabling cleanup
2013-05-10 3:09 ` Scott Wood
@ 2013-05-10 5:01 ` Bhushan Bharat-R65777
-1 siblings, 0 replies; 38+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-05-10 5:01 UTC (permalink / raw)
To: Wood Scott-B07421, Alexander Graf, Benjamin Herrenschmidt
Cc: Wood Scott-B07421, linuxppc-dev, kvm, kvm-ppc
> -----Original Message-----
> From: kvm-ppc-owner@vger.kernel.org [mailto:kvm-ppc-owner@vger.kernel.org] On
> Behalf Of Scott Wood
> Sent: Friday, May 10, 2013 8:40 AM
> To: Alexander Graf; Benjamin Herrenschmidt
> Cc: kvm-ppc@vger.kernel.org; kvm@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
> Wood Scott-B07421
> Subject: [PATCH v2 4/4] kvm/ppc: IRQ disabling cleanup
>
> Simplify the handling of lazy EE by going directly from fully-enabled
> to hard-disabled. This replaces the lazy_irq_pending() check
> (including its misplaced kvm_guest_exit() call).
>
> As suggested by Tiejun Chen, move the interrupt disabling into
> kvmppc_prepare_to_enter() rather than have each caller do it. Also
> move the IRQ enabling on heavyweight exit into
> kvmppc_prepare_to_enter().
>
> Don't move kvmppc_fix_ee_before_entry() into kvmppc_prepare_to_enter(),
> so that the caller can avoid marking interrupts enabled earlier than
> necessary (e.g. book3s_pr waits until after FP save/restore is done).
>
> Signed-off-by: Scott Wood <scottwood@freescale.com>
> ---
> arch/powerpc/include/asm/kvm_ppc.h | 6 ++++++
> arch/powerpc/kvm/book3s_pr.c | 12 +++---------
> arch/powerpc/kvm/booke.c | 9 ++-------
> arch/powerpc/kvm/powerpc.c | 21 ++++++++-------------
> 4 files changed, 19 insertions(+), 29 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h
> b/arch/powerpc/include/asm/kvm_ppc.h
> index 6885846..e4474f8 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -404,6 +404,12 @@ static inline void kvmppc_fix_ee_before_entry(void)
> trace_hardirqs_on();
>
> #ifdef CONFIG_PPC64
> + /*
> + * To avoid races, the caller must have gone directly from having
> + * interrupts fully-enabled to hard-disabled.
> + */
> + WARN_ON(local_paca->irq_happened != PACA_IRQ_HARD_DIS);
> +
> /* Only need to enable IRQs by hard enabling them after this */
> local_paca->irq_happened = 0;
> local_paca->soft_enabled = 1;
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index 0b97ce4..e61e39e 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -884,14 +884,11 @@ program_interrupt:
> * and if we really did time things so badly, then we just exit
> * again due to a host external interrupt.
> */
> - local_irq_disable();
> s = kvmppc_prepare_to_enter(vcpu);
> - if (s <= 0) {
> - local_irq_enable();
> + if (s <= 0)
> r = s;
> - } else {
> + else
> kvmppc_fix_ee_before_entry();
> - }
> }
>
> trace_kvm_book3s_reenter(r, vcpu);
> @@ -1121,12 +1118,9 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct
> kvm_vcpu *vcpu)
> * really did time things so badly, then we just exit again due to
> * a host external interrupt.
> */
> - local_irq_disable();
> ret = kvmppc_prepare_to_enter(vcpu);
> - if (ret <= 0) {
> - local_irq_enable();
> + if (ret <= 0)
> goto out;
> - }
>
> /* Save FPU state in stack */
> if (current->thread.regs->msr & MSR_FP)
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index eb89b83..f7c0111 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -666,10 +666,8 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct
> kvm_vcpu *vcpu)
> return -EINVAL;
> }
>
> - local_irq_disable();
> s = kvmppc_prepare_to_enter(vcpu);
> if (s <= 0) {
> - local_irq_enable();
> ret = s;
> goto out;
> }
> @@ -1148,14 +1146,11 @@ int kvmppc_handle_exit(struct kvm_run *run, struct
> kvm_vcpu *vcpu,
> * aren't already exiting to userspace for some other reason.
> */
> if (!(r & RESUME_HOST)) {
> - local_irq_disable();
OK, now we do not soft-disable before kvmppc_prepare_to_enter().
> s = kvmppc_prepare_to_enter(vcpu);
> - if (s <= 0) {
> - local_irq_enable();
> + if (s <= 0)
> r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
> - } else {
> + else
> kvmppc_fix_ee_before_entry();
> - }
> }
>
> return r;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 4e05f8c..f8659aa 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -64,12 +64,14 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
> {
> int r = 1;
>
> - WARN_ON_ONCE(!irqs_disabled());
> + WARN_ON(irqs_disabled());
> + hard_irq_disable();
Here we hard-disable in kvmppc_prepare_to_enter(), so my comment in the other patch about interrupt loss is no longer valid.
So here:
MSR.EE = 0
local_paca->soft_enabled = 0
local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
> +
> while (true) {
> if (need_resched()) {
> local_irq_enable();
This will make the state:
MSR.EE = 1
local_paca->soft_enabled = 1
local_paca->irq_happened = PACA_IRQ_HARD_DIS; //same as before
Is that a valid state, where interrupts are fully enabled but irq_happened is not 0?
> cond_resched();
> - local_irq_disable();
> + hard_irq_disable();
> continue;
> }
>
> @@ -95,7 +97,7 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
> local_irq_enable();
> trace_kvm_check_requests(vcpu);
> r = kvmppc_core_check_requests(vcpu);
> - local_irq_disable();
> + hard_irq_disable();
> if (r > 0)
> continue;
> break;
> @@ -108,21 +110,14 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
> }
>
> #ifdef CONFIG_PPC64
> - /* lazy EE magic */
> - hard_irq_disable();
> - if (lazy_irq_pending()) {
> - /* Got an interrupt in between, try again */
> - local_irq_enable();
> - local_irq_disable();
> - kvm_guest_exit();
> - continue;
> - }
> + WARN_ON(lazy_irq_pending());
> #endif
>
> kvm_guest_enter();
> - break;
> + return r;
> }
>
> + local_irq_enable();
> return r;
> }
int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
{
	int r = 0;
	WARN_ON_ONCE(!irqs_disabled());

	kvmppc_core_check_exceptions(vcpu);

	if (vcpu->requests) {
		/* Exception delivery raised request; start over */
		return 1;
	}

	if (vcpu->arch.shared->msr & MSR_WE) {
		local_irq_enable();
		kvm_vcpu_block(vcpu);
		clear_bit(KVM_REQ_UNHALT, &vcpu->requests);
		local_irq_disable();
		^^^
Don't we require hard_irq_disable() here?
-Bharat
> #endif /* CONFIG_KVM_BOOK3S_64_HV */
> --
> 1.7.10.4
* Re: [PATCH v2 4/4] kvm/ppc: IRQ disabling cleanup
2013-05-10 5:01 ` Bhushan Bharat-R65777
@ 2013-05-10 5:31 ` tiejun.chen
-1 siblings, 0 replies; 38+ messages in thread
From: tiejun.chen @ 2013-05-10 5:31 UTC (permalink / raw)
To: Bhushan Bharat-R65777
Cc: Wood Scott-B07421, Alexander Graf, Benjamin Herrenschmidt,
linuxppc-dev, kvm, kvm-ppc
>> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
>> index 4e05f8c..f8659aa 100644
>> --- a/arch/powerpc/kvm/powerpc.c
>> +++ b/arch/powerpc/kvm/powerpc.c
>> @@ -64,12 +64,14 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
>> {
>> int r = 1;
>>
>> - WARN_ON_ONCE(!irqs_disabled());
>> + WARN_ON(irqs_disabled());
>> + hard_irq_disable();
>
> Here we hard disable in kvmppc_prepare_to_enter(), so my comment in other patch about interrupt loss is no more valid.
>
> So here
> MSR.EE = 0
> local_paca->soft_enabled = 0
> local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
>
>> +
>> while (true) {
>> if (need_resched()) {
>> local_irq_enable();
>
> This will make the state:
> MSR.EE = 1
> local_paca->soft_enabled = 1
> local_paca->irq_happened = PACA_IRQ_HARD_DIS; //same as before
Why would the state be the same as above? local_irq_enable() can call
__check_irq_replay(), which clears PACA_IRQ_HARD_DIS.
>
> Is that a valid state where interrupts are fully enabled and irq_happend in not 0?
>
>> cond_resched();
>> - local_irq_disable();
>> + hard_irq_disable();
>> continue;
>> }
>>
>> @@ -95,7 +97,7 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
>> local_irq_enable();
>> trace_kvm_check_requests(vcpu);
>> r = kvmppc_core_check_requests(vcpu);
>> - local_irq_disable();
>> + hard_irq_disable();
>> if (r > 0)
>> continue;
>> break;
>> @@ -108,21 +110,14 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
>> }
>>
>> #ifdef CONFIG_PPC64
>> - /* lazy EE magic */
>> - hard_irq_disable();
>> - if (lazy_irq_pending()) {
>> - /* Got an interrupt in between, try again */
>> - local_irq_enable();
>> - local_irq_disable();
>> - kvm_guest_exit();
>> - continue;
>> - }
>> + WARN_ON(lazy_irq_pending());
>> #endif
>>
>> kvm_guest_enter();
>> - break;
>> + return r;
>> }
>>
>> + local_irq_enable();
>> return r;
>> }
>
>
> int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
> {
> int r = 0;
> WARN_ON_ONCE(!irqs_disabled());
>
> kvmppc_core_check_exceptions(vcpu);
>
> if (vcpu->requests) {
> /* Exception delivery raised request; start over */
> return 1;
> }
>
> if (vcpu->arch.shared->msr & MSR_WE) {
> local_irq_enable();
> kvm_vcpu_block(vcpu);
> clear_bit(KVM_REQ_UNHALT, &vcpu->requests);
> local_irq_disable();
> ^^^
> We do not require hard_irq_disable() here?
Regarding the split between kvmppc_core_prepare_to_enter() and
kvmppc_prepare_to_enter(): as I recall, Scott had some discussion with Ben
about this earlier.
Tiejun
>
* Re: [PATCH v2 1/4] powerpc: hard_irq_disable(): Call trace_hardirqs_off after disabling
2013-05-10 3:09 ` Scott Wood
@ 2013-05-10 7:00 ` Benjamin Herrenschmidt
-1 siblings, 0 replies; 38+ messages in thread
From: Benjamin Herrenschmidt @ 2013-05-10 7:00 UTC (permalink / raw)
To: Scott Wood; +Cc: Alexander Graf, kvm-ppc, kvm, linuxppc-dev
On Thu, 2013-05-09 at 22:09 -0500, Scott Wood wrote:
> lockdep.c has this:
> /*
> * So we're supposed to get called after you mask local IRQs,
> * but for some reason the hardware doesn't quite think you did
> * a proper job.
> */
> if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
> return;
>
> Since irqs_disabled() is based on soft_enabled(), that (not just the
> hard EE bit) needs to be 0 before we call trace_hardirqs_off.
>
> Signed-off-by: Scott Wood <scottwood@freescale.com>
Oops ;-)
I'll apply that to my tree and will send it to Linus right after -rc1,
the rest will go the normal way for KVM patches.
Cheers,
Ben.
> ---
> arch/powerpc/include/asm/hw_irq.h | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> index d615b28..ba713f1 100644
> --- a/arch/powerpc/include/asm/hw_irq.h
> +++ b/arch/powerpc/include/asm/hw_irq.h
> @@ -96,11 +96,12 @@ static inline bool arch_irqs_disabled(void)
> #endif
>
> #define hard_irq_disable() do { \
> + u8 _was_enabled = get_paca()->soft_enabled; \
> __hard_irq_disable(); \
> - if (local_paca->soft_enabled) \
> - trace_hardirqs_off(); \
> get_paca()->soft_enabled = 0; \
> get_paca()->irq_happened |= PACA_IRQ_HARD_DIS; \
> + if (_was_enabled) \
> + trace_hardirqs_off(); \
> } while(0)
>
> static inline bool lazy_irq_pending(void)
* Re: [PATCH v2 2/4] kvm/ppc/booke64: Fix lazy ee handling in kvmppc_handle_exit()
2013-05-10 5:01 ` Bhushan Bharat-R65777
@ 2013-05-10 22:43 ` Scott Wood
-1 siblings, 0 replies; 38+ messages in thread
From: Scott Wood @ 2013-05-10 22:43 UTC (permalink / raw)
To: Bhushan Bharat-R65777
Cc: Wood Scott-B07421, Alexander Graf, Benjamin Herrenschmidt,
kvm-ppc, kvm, linuxppc-dev, Wood Scott-B07421
On 05/10/2013 12:01:19 AM, Bhushan Bharat-R65777 wrote:
>
>
> > -----Original Message-----
> > From: kvm-ppc-owner@vger.kernel.org
> [mailto:kvm-ppc-owner@vger.kernel.org] On
> > Behalf Of Scott Wood
> > Sent: Friday, May 10, 2013 8:40 AM
> > To: Alexander Graf; Benjamin Herrenschmidt
> > Cc: kvm-ppc@vger.kernel.org; kvm@vger.kernel.org;
> linuxppc-dev@lists.ozlabs.org;
> > Wood Scott-B07421
> > Subject: [PATCH v2 2/4] kvm/ppc/booke64: Fix lazy ee handling in
> > kvmppc_handle_exit()
> >
> > EE is hard-disabled on entry to kvmppc_handle_exit(), so call
> > hard_irq_disable() so that PACA_IRQ_HARD_DIS is set, and
> soft_enabled
> > is unset.
> >
> > Without this, we get warnings such as
> arch/powerpc/kernel/time.c:300,
> > and sometimes host kernel hangs.
> >
> > Signed-off-by: Scott Wood <scottwood@freescale.com>
> > ---
> > arch/powerpc/kvm/booke.c | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> > diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> > index 1020119..705fc5c 100644
> > --- a/arch/powerpc/kvm/booke.c
> > +++ b/arch/powerpc/kvm/booke.c
> > @@ -833,6 +833,11 @@ int kvmppc_handle_exit(struct kvm_run *run,
> struct kvm_vcpu
> > *vcpu,
> > int r = RESUME_HOST;
> > int s;
> >
> > +#ifdef CONFIG_PPC64
> > + WARN_ON(local_paca->irq_happened != 0);
> > +#endif
> > + hard_irq_disable();
>
> It is not actually hard-disabling, since EE is already clear, but it
> makes the state look hard-disabled to the host. Right?
> If so, should we add a comment here explaining why we are doing this?
Yes, I can add a comment.
-Scott
* Re: [PATCH v2 3/4] kvm/ppc: Call trace_hardirqs_on before entry
2013-05-10 4:40 ` tiejun.chen
@ 2013-05-10 22:47 ` Scott Wood
-1 siblings, 0 replies; 38+ messages in thread
From: Scott Wood @ 2013-05-10 22:47 UTC (permalink / raw)
To: tiejun.chen
Cc: Wood Scott-B07421, kvm, Alexander Graf, kvm-ppc,
Bhushan Bharat-R65777, linuxppc-dev
On 05/09/2013 11:40:08 PM, tiejun.chen wrote:
> On 05/10/2013 11:34 AM, Bhushan Bharat-R65777 wrote:
>>> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
>>> index 705fc5c..eb89b83 100644
>>> --- a/arch/powerpc/kvm/booke.c
>>> +++ b/arch/powerpc/kvm/booke.c
>>> @@ -673,7 +673,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run,
>>> struct kvm_vcpu
>>> *vcpu)
>>> ret = s;
>>> goto out;
>>> }
>>> - kvmppc_lazy_ee_enable();
>>> + kvmppc_fix_ee_before_entry();
>>
>> local_irq_disable() is called before kvmppc_prepare_to_enter().
>
> In patch 4, we call hard_irq_disable() once enter
> kvmppc_prepare_to_enter().
And before patch 4, we have the code near the end of
kvmppc_prepare_to_enter() that checks lazy_irq_pending() and aborts
guest entry if there was a race. If I'd known about that bit of code
beforehand, I probably wouldn't have bothered with most of patch 4/4,
but now that it's been done it seems cleaner.
-Scott
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH v2 4/4] kvm/ppc: IRQ disabling cleanup
2013-05-10 5:01 ` Bhushan Bharat-R65777
@ 2013-05-10 22:53 ` Scott Wood
-1 siblings, 0 replies; 38+ messages in thread
From: Scott Wood @ 2013-05-10 22:53 UTC (permalink / raw)
To: Bhushan Bharat-R65777
Cc: Wood Scott-B07421, Alexander Graf, Benjamin Herrenschmidt,
kvm-ppc, kvm, linuxppc-dev, Wood Scott-B07421
On 05/10/2013 12:01:38 AM, Bhushan Bharat-R65777 wrote:
>
>
> > -----Original Message-----
> > From: kvm-ppc-owner@vger.kernel.org
> [mailto:kvm-ppc-owner@vger.kernel.org] On
> > Behalf Of Scott Wood
> > Sent: Friday, May 10, 2013 8:40 AM
> > To: Alexander Graf; Benjamin Herrenschmidt
> > Cc: kvm-ppc@vger.kernel.org; kvm@vger.kernel.org;
> linuxppc-dev@lists.ozlabs.org;
> > Wood Scott-B07421
> > Subject: [PATCH v2 4/4] kvm/ppc: IRQ disabling cleanup
> >
> > - WARN_ON_ONCE(!irqs_disabled());
> > + WARN_ON(irqs_disabled());
> > + hard_irq_disable();
>
> Here we hard disable in kvmppc_prepare_to_enter(), so my comment in
> the other patch about interrupt loss is no longer valid.
>
> So here
> MSR.EE = 0
> local_paca->soft_enabled = 0
> local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
>
> > +
> > while (true) {
> > if (need_resched()) {
> > local_irq_enable();
>
> This will make the state:
> MSR.EE = 1
> local_paca->soft_enabled = 1
> local_paca->irq_happened = PACA_IRQ_HARD_DIS; //same as before
>
> Is that a valid state where interrupts are fully enabled and
> irq_happened is not 0?
PACA_IRQ_HARD_DIS will have been cleared by local_irq_enable(), as
Tiejun pointed out.
> int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
> {
> int r = 0;
> WARN_ON_ONCE(!irqs_disabled());
>
> kvmppc_core_check_exceptions(vcpu);
>
> if (vcpu->requests) {
> /* Exception delivery raised request; start over */
> return 1;
> }
>
> if (vcpu->arch.shared->msr & MSR_WE) {
> local_irq_enable();
> kvm_vcpu_block(vcpu);
> clear_bit(KVM_REQ_UNHALT, &vcpu->requests);
> local_irq_disable();
> ^^^
> We do not require hard_irq_disable() here?
Yes, that should be changed to hard_irq_disable(), and I'll add a
WARN_ON to double check that interrupts are hard-disabled (eventually
we'll probably want to make these critical-path assertions dependent on
a debug option...). It doesn't really matter all that much, though,
since we don't have MSR_WE on any 64-bit booke chips. :-)
-Scott
^ permalink raw reply [flat|nested] 38+ messages in thread
end of thread, other threads:[~2013-05-10 22:53 UTC | newest]
Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-05-10  3:09 [PATCH v2 0/4] kvm/ppc: interrupt disabling fixes Scott Wood
2013-05-10  3:09 ` [PATCH v2 1/4] powerpc: hard_irq_disable(): Call trace_hardirqs_off after disabling Scott Wood
2013-05-10  7:00 ` Benjamin Herrenschmidt
2013-05-10  3:09 ` [PATCH v2 2/4] kvm/ppc/booke64: Fix lazy ee handling in kvmppc_handle_exit() Scott Wood
2013-05-10  5:01 ` Bhushan Bharat-R65777
2013-05-10 22:43 ` Scott Wood
2013-05-10  3:09 ` [PATCH v2 3/4] kvm/ppc: Call trace_hardirqs_on before entry Scott Wood
2013-05-10  3:34 ` Bhushan Bharat-R65777
2013-05-10  4:40 ` tiejun.chen
2013-05-10 22:47 ` Scott Wood
2013-05-10  3:09 ` [PATCH v2 4/4] kvm/ppc: IRQ disabling cleanup Scott Wood
2013-05-10  5:01 ` Bhushan Bharat-R65777
2013-05-10  5:31 ` tiejun.chen
2013-05-10 22:53 ` Scott Wood