linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] KVM: Make kvm_lock non-raw
@ 2013-09-16 14:06 Paolo Bonzini
  2013-09-16 14:06 ` [PATCH 1/3] KVM: cleanup (physical) CPU hotplug Paolo Bonzini
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Paolo Bonzini @ 2013-09-16 14:06 UTC (permalink / raw)
  To: linux-kernel; +Cc: Paul Gortmaker, kvm, gleb, jan.kiszka

Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
patch that shrank the kvm_lock critical section so that the mmu_lock
critical section does not nest within it, but in the end there is no reason
for the vm_list to be protected by a raw spinlock.  Only manipulations
of kvm_usage_count and the consequent hardware_enable/disable operations
are not preemptible.

This small series thus splits kvm_lock into a "raw" part and a
"non-raw" part.

Paul, could you please provide your Tested-by?

Thanks,

Paolo

Paolo Bonzini (3):
  KVM: cleanup (physical) CPU hotplug
  KVM: protect kvm_usage_count with its own spinlock
  KVM: Convert kvm_lock back to non-raw spinlock

 Documentation/virtual/kvm/locking.txt |  8 ++++--
 arch/x86/kvm/mmu.c                    |  4 +--
 arch/x86/kvm/x86.c                    |  8 +++---
 include/linux/kvm_host.h              |  2 +-
 virt/kvm/kvm_main.c                   | 51 ++++++++++++++++++-----------------
 5 files changed, 40 insertions(+), 33 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/3] KVM: cleanup (physical) CPU hotplug
  2013-09-16 14:06 [PATCH 0/3] KVM: Make kvm_lock non-raw Paolo Bonzini
@ 2013-09-16 14:06 ` Paolo Bonzini
  2013-09-17  7:57   ` Jan Kiszka
  2013-09-16 14:06 ` [PATCH 2/3] KVM: protect kvm_usage_count with its own spinlock Paolo Bonzini
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 20+ messages in thread
From: Paolo Bonzini @ 2013-09-16 14:06 UTC (permalink / raw)
  To: linux-kernel; +Cc: Paul Gortmaker, stable, kvm, gleb, jan.kiszka

Remove the useless argument, and do not do anything if there are no
VMs running at the time of the hotplug.

Cc: stable@vger.kernel.org
Cc: kvm@vger.kernel.org
Cc: gleb@redhat.com
Cc: jan.kiszka@siemens.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 virt/kvm/kvm_main.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 979bff4..75522b3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2681,10 +2681,11 @@ static void hardware_enable_nolock(void *junk)
 	}
 }
 
-static void hardware_enable(void *junk)
+static void hardware_enable(void)
 {
 	raw_spin_lock(&kvm_lock);
-	hardware_enable_nolock(junk);
+	if (kvm_usage_count)
+		hardware_enable_nolock(NULL);
 	raw_spin_unlock(&kvm_lock);
 }
 
@@ -2698,10 +2699,11 @@ static void hardware_disable_nolock(void *junk)
 	kvm_arch_hardware_disable(NULL);
 }
 
-static void hardware_disable(void *junk)
+static void hardware_disable(void)
 {
 	raw_spin_lock(&kvm_lock);
-	hardware_disable_nolock(junk);
+	if (kvm_usage_count)
+		hardware_disable_nolock(NULL);
 	raw_spin_unlock(&kvm_lock);
 }
 
@@ -2756,12 +2758,12 @@ static int kvm_cpu_hotplug(struct notifier_block *notifier, unsigned long val,
 	case CPU_DYING:
 		printk(KERN_INFO "kvm: disabling virtualization on CPU%d\n",
 		       cpu);
-		hardware_disable(NULL);
+		hardware_disable();
 		break;
 	case CPU_STARTING:
 		printk(KERN_INFO "kvm: enabling virtualization on CPU%d\n",
 		       cpu);
-		hardware_enable(NULL);
+		hardware_enable();
 		break;
 	}
 	return NOTIFY_OK;
-- 
1.8.3.1




* [PATCH 2/3] KVM: protect kvm_usage_count with its own spinlock
  2013-09-16 14:06 [PATCH 0/3] KVM: Make kvm_lock non-raw Paolo Bonzini
  2013-09-16 14:06 ` [PATCH 1/3] KVM: cleanup (physical) CPU hotplug Paolo Bonzini
@ 2013-09-16 14:06 ` Paolo Bonzini
  2013-09-16 14:06 ` [PATCH 3/3] KVM: Convert kvm_lock back to non-raw spinlock Paolo Bonzini
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2013-09-16 14:06 UTC (permalink / raw)
  To: linux-kernel; +Cc: Paul Gortmaker, stable, kvm, gleb, jan.kiszka

The VM list need not be protected by a raw spinlock.  Separate the
two uses of kvm_lock so that the vm_list side can be made non-raw.

Cc: stable@vger.kernel.org
Cc: kvm@vger.kernel.org
Cc: gleb@redhat.com
Cc: jan.kiszka@siemens.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virtual/kvm/locking.txt |  6 +++++-
 virt/kvm/kvm_main.c                   | 19 ++++++++++---------
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/Documentation/virtual/kvm/locking.txt b/Documentation/virtual/kvm/locking.txt
index a9f366e..ba9e1c2 100644
--- a/Documentation/virtual/kvm/locking.txt
+++ b/Documentation/virtual/kvm/locking.txt
@@ -135,7 +135,11 @@ Name:		kvm_lock
 Type:		raw_spinlock
 Arch:		any
 Protects:	- vm_list
-		- hardware virtualization enable/disable
+
+Name:		kvm_count_lock
+Type:		raw_spinlock_t
+Arch:		any
+Protects:	- hardware virtualization enable/disable
 Comment:	'raw' because hardware enabling/disabling must be atomic /wrt
 		migration.
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 75522b3..da13379 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -71,6 +71,7 @@ MODULE_LICENSE("GPL");
  */
 
 DEFINE_RAW_SPINLOCK(kvm_lock);
+static DEFINE_RAW_SPINLOCK(kvm_count_lock);
 LIST_HEAD(vm_list);
 
 static cpumask_var_t cpus_hardware_enabled;
@@ -2683,10 +2684,10 @@ static void hardware_enable_nolock(void *junk)
 
 static void hardware_enable(void)
 {
-	raw_spin_lock(&kvm_lock);
+	raw_spin_lock(&kvm_count_lock);
 	if (kvm_usage_count)
 		hardware_enable_nolock(NULL);
-	raw_spin_unlock(&kvm_lock);
+	raw_spin_unlock(&kvm_count_lock);
 }
 
 static void hardware_disable_nolock(void *junk)
@@ -2701,10 +2702,10 @@ static void hardware_disable_nolock(void *junk)
 
 static void hardware_disable(void)
 {
-	raw_spin_lock(&kvm_lock);
+	raw_spin_lock(&kvm_count_lock);
 	if (kvm_usage_count)
 		hardware_disable_nolock(NULL);
-	raw_spin_unlock(&kvm_lock);
+	raw_spin_unlock(&kvm_count_lock);
 }
 
 static void hardware_disable_all_nolock(void)
@@ -2718,16 +2719,16 @@ static void hardware_disable_all_nolock(void)
 
 static void hardware_disable_all(void)
 {
-	raw_spin_lock(&kvm_lock);
+	raw_spin_lock(&kvm_count_lock);
 	hardware_disable_all_nolock();
-	raw_spin_unlock(&kvm_lock);
+	raw_spin_unlock(&kvm_count_lock);
 }
 
 static int hardware_enable_all(void)
 {
 	int r = 0;
 
-	raw_spin_lock(&kvm_lock);
+	raw_spin_lock(&kvm_count_lock);
 
 	kvm_usage_count++;
 	if (kvm_usage_count == 1) {
@@ -2740,7 +2741,7 @@ static int hardware_enable_all(void)
 		}
 	}
 
-	raw_spin_unlock(&kvm_lock);
+	raw_spin_unlock(&kvm_count_lock);
 
 	return r;
 }
@@ -3133,7 +3134,7 @@ static int kvm_suspend(void)
 static void kvm_resume(void)
 {
 	if (kvm_usage_count) {
-		WARN_ON(raw_spin_is_locked(&kvm_lock));
+		WARN_ON(raw_spin_is_locked(&kvm_count_lock));
 		hardware_enable_nolock(NULL);
 	}
 }
-- 
1.8.3.1




* [PATCH 3/3] KVM: Convert kvm_lock back to non-raw spinlock
  2013-09-16 14:06 [PATCH 0/3] KVM: Make kvm_lock non-raw Paolo Bonzini
  2013-09-16 14:06 ` [PATCH 1/3] KVM: cleanup (physical) CPU hotplug Paolo Bonzini
  2013-09-16 14:06 ` [PATCH 2/3] KVM: protect kvm_usage_count with its own spinlock Paolo Bonzini
@ 2013-09-16 14:06 ` Paolo Bonzini
  2013-09-16 22:12 ` [PATCH 0/3] KVM: Make kvm_lock non-raw Paul Gortmaker
  2013-09-22  7:42 ` Gleb Natapov
  4 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2013-09-16 14:06 UTC (permalink / raw)
  To: linux-kernel; +Cc: Paul Gortmaker, stable, kvm, gleb, jan.kiszka

In commit e935b8372cf8 ("KVM: Convert kvm_lock to raw_spinlock"),
kvm_lock was made a raw lock.  However, the kvm mmu_shrink()
function tries to grab the (non-raw) mmu_lock while the raw
kvm_lock is held.  This leads to the following:

BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 0, pid: 55, name: kswapd0
Preemption disabled at:[<ffffffffa0376eac>] mmu_shrink+0x5c/0x1b0 [kvm]

Pid: 55, comm: kswapd0 Not tainted 3.4.34_preempt-rt
Call Trace:
 [<ffffffff8106f2ad>] __might_sleep+0xfd/0x160
 [<ffffffff817d8d64>] rt_spin_lock+0x24/0x50
 [<ffffffffa0376f3c>] mmu_shrink+0xec/0x1b0 [kvm]
 [<ffffffff8111455d>] shrink_slab+0x17d/0x3a0
 [<ffffffff81151f00>] ? mem_cgroup_iter+0x130/0x260
 [<ffffffff8111824a>] balance_pgdat+0x54a/0x730
 [<ffffffff8111fe47>] ? set_pgdat_percpu_threshold+0xa7/0xd0
 [<ffffffff811185bf>] kswapd+0x18f/0x490
 [<ffffffff81070961>] ? get_parent_ip+0x11/0x50
 [<ffffffff81061970>] ? __init_waitqueue_head+0x50/0x50
 [<ffffffff81118430>] ? balance_pgdat+0x730/0x730
 [<ffffffff81060d2b>] kthread+0xdb/0xe0
 [<ffffffff8106e122>] ? finish_task_switch+0x52/0x100
 [<ffffffff817e1e94>] kernel_thread_helper+0x4/0x10
 [<ffffffff81060c50>] ? __init_kthread_worker+0x

After the previous patch, kvm_lock need not be a raw spinlock anymore,
so change it back.

Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: stable@vger.kernel.org
Cc: kvm@vger.kernel.org
Cc: gleb@redhat.com
Cc: jan.kiszka@siemens.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virtual/kvm/locking.txt |  2 +-
 arch/x86/kvm/mmu.c                    |  4 ++--
 arch/x86/kvm/x86.c                    |  8 ++++----
 include/linux/kvm_host.h              |  2 +-
 virt/kvm/kvm_main.c                   | 18 +++++++++---------
 5 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/Documentation/virtual/kvm/locking.txt b/Documentation/virtual/kvm/locking.txt
index ba9e1c2..f886941 100644
--- a/Documentation/virtual/kvm/locking.txt
+++ b/Documentation/virtual/kvm/locking.txt
@@ -132,7 +132,7 @@ See the comments in spte_has_volatile_bits() and mmu_spte_update().
 ------------
 
 Name:		kvm_lock
-Type:		raw_spinlock
+Type:		spinlock_t
 Arch:		any
 Protects:	- vm_list
 
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6e2d2c8..d027a72 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4429,7 +4429,7 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
 	if (nr_to_scan == 0)
 		goto out;
 
-	raw_spin_lock(&kvm_lock);
+	spin_lock(&kvm_lock);
 
 	list_for_each_entry(kvm, &vm_list, vm_list) {
 		int idx;
@@ -4473,7 +4473,7 @@ unlock:
 		break;
 	}
 
-	raw_spin_unlock(&kvm_lock);
+	spin_unlock(&kvm_lock);
 
 out:
 	return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7c36099..3acd631 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5273,7 +5273,7 @@ static int kvmclock_cpufreq_notifier(struct notifier_block *nb, unsigned long va
 
 	smp_call_function_single(freq->cpu, tsc_khz_changed, freq, 1);
 
-	raw_spin_lock(&kvm_lock);
+	spin_lock(&kvm_lock);
 	list_for_each_entry(kvm, &vm_list, vm_list) {
 		kvm_for_each_vcpu(i, vcpu, kvm) {
 			if (vcpu->cpu != freq->cpu)
@@ -5283,7 +5283,7 @@ static int kvmclock_cpufreq_notifier(struct notifier_block *nb, unsigned long va
 				send_ipi = 1;
 		}
 	}
-	raw_spin_unlock(&kvm_lock);
+	spin_unlock(&kvm_lock);
 
 	if (freq->old < freq->new && send_ipi) {
 		/*
@@ -5436,12 +5436,12 @@ static void pvclock_gtod_update_fn(struct work_struct *work)
 	struct kvm_vcpu *vcpu;
 	int i;
 
-	raw_spin_lock(&kvm_lock);
+	spin_lock(&kvm_lock);
 	list_for_each_entry(kvm, &vm_list, vm_list)
 		kvm_for_each_vcpu(i, vcpu, kvm)
 			set_bit(KVM_REQ_MASTERCLOCK_UPDATE, &vcpu->requests);
 	atomic_set(&kvm_guest_has_master_clock, 0);
-	raw_spin_unlock(&kvm_lock);
+	spin_unlock(&kvm_lock);
 }
 
 static DECLARE_WORK(pvclock_gtod_work, pvclock_gtod_update_fn);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 749bdb1..7c961e1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -142,7 +142,7 @@ struct kvm;
 struct kvm_vcpu;
 extern struct kmem_cache *kvm_vcpu_cache;
 
-extern raw_spinlock_t kvm_lock;
+extern spinlock_t kvm_lock;
 extern struct list_head vm_list;
 
 struct kvm_io_range {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index da13379..3397a9c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -70,7 +70,7 @@ MODULE_LICENSE("GPL");
  * 		kvm->lock --> kvm->slots_lock --> kvm->irq_lock
  */
 
-DEFINE_RAW_SPINLOCK(kvm_lock);
+DEFINE_SPINLOCK(kvm_lock);
 static DEFINE_RAW_SPINLOCK(kvm_count_lock);
 LIST_HEAD(vm_list);
 
@@ -491,9 +491,9 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	if (r)
 		goto out_err;
 
-	raw_spin_lock(&kvm_lock);
+	spin_lock(&kvm_lock);
 	list_add(&kvm->vm_list, &vm_list);
-	raw_spin_unlock(&kvm_lock);
+	spin_unlock(&kvm_lock);
 
 	return kvm;
 
@@ -582,9 +582,9 @@ static void kvm_destroy_vm(struct kvm *kvm)
 	struct mm_struct *mm = kvm->mm;
 
 	kvm_arch_sync_events(kvm);
-	raw_spin_lock(&kvm_lock);
+	spin_lock(&kvm_lock);
 	list_del(&kvm->vm_list);
-	raw_spin_unlock(&kvm_lock);
+	spin_unlock(&kvm_lock);
 	kvm_free_irq_routing(kvm);
 	for (i = 0; i < KVM_NR_BUSES; i++)
 		kvm_io_bus_destroy(kvm->buses[i]);
@@ -3057,10 +3057,10 @@ static int vm_stat_get(void *_offset, u64 *val)
 	struct kvm *kvm;
 
 	*val = 0;
-	raw_spin_lock(&kvm_lock);
+	spin_lock(&kvm_lock);
 	list_for_each_entry(kvm, &vm_list, vm_list)
 		*val += *(u32 *)((void *)kvm + offset);
-	raw_spin_unlock(&kvm_lock);
+	spin_unlock(&kvm_lock);
 	return 0;
 }
 
@@ -3074,12 +3074,12 @@ static int vcpu_stat_get(void *_offset, u64 *val)
 	int i;
 
 	*val = 0;
-	raw_spin_lock(&kvm_lock);
+	spin_lock(&kvm_lock);
 	list_for_each_entry(kvm, &vm_list, vm_list)
 		kvm_for_each_vcpu(i, vcpu, kvm)
 			*val += *(u32 *)((void *)vcpu + offset);
 
-	raw_spin_unlock(&kvm_lock);
+	spin_unlock(&kvm_lock);
 	return 0;
 }
 
-- 
1.8.3.1



* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-16 14:06 [PATCH 0/3] KVM: Make kvm_lock non-raw Paolo Bonzini
                   ` (2 preceding siblings ...)
  2013-09-16 14:06 ` [PATCH 3/3] KVM: Convert kvm_lock back to non-raw spinlock Paolo Bonzini
@ 2013-09-16 22:12 ` Paul Gortmaker
  2013-09-20 17:51   ` Paul Gortmaker
  2013-09-22  7:42 ` Gleb Natapov
  4 siblings, 1 reply; 20+ messages in thread
From: Paul Gortmaker @ 2013-09-16 22:12 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, gleb, jan.kiszka

On 13-09-16 10:06 AM, Paolo Bonzini wrote:
> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> patch that shrunk the kvm_lock critical section so that the mmu_lock
> critical section does not nest with it, but in the end there is no reason
> for the vm_list to be protected by a raw spinlock.  Only manipulations
> of kvm_usage_count and the consequent hardware_enable/disable operations
> are not preemptible.
> 
> This small series thus splits the kvm_lock in the "raw" part and the
> "non-raw" part.
> 
> Paul, could you please provide your Tested-by?

Sure, I'll go back and see if I can find what triggered it in the
original report, and give the patches a spin on 3.4.x-rt (and probably
3.10.x-rt, since that is where rt-current is presently).

Paul.
--

> 
> Thanks,
> 
> Paolo
> 
> Paolo Bonzini (3):
>   KVM: cleanup (physical) CPU hotplug
>   KVM: protect kvm_usage_count with its own spinlock
>   KVM: Convert kvm_lock back to non-raw spinlock
> 
>  Documentation/virtual/kvm/locking.txt |  8 ++++--
>  arch/x86/kvm/mmu.c                    |  4 +--
>  arch/x86/kvm/x86.c                    |  8 +++---
>  include/linux/kvm_host.h              |  2 +-
>  virt/kvm/kvm_main.c                   | 51 ++++++++++++++++++-----------------
>  5 files changed, 40 insertions(+), 33 deletions(-)
> 


* Re: [PATCH 1/3] KVM: cleanup (physical) CPU hotplug
  2013-09-16 14:06 ` [PATCH 1/3] KVM: cleanup (physical) CPU hotplug Paolo Bonzini
@ 2013-09-17  7:57   ` Jan Kiszka
  2013-09-17 23:19     ` Paolo Bonzini
  0 siblings, 1 reply; 20+ messages in thread
From: Jan Kiszka @ 2013-09-17  7:57 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, Paul Gortmaker, stable, kvm, gleb

On 2013-09-16 16:06, Paolo Bonzini wrote:
> Remove the useless argument, and do not do anything if there are no
> VMs running at the time of the hotplug.

kvm_cpu_hotplug already filters !kvm_usage_count. If we need the check
to be under kvm_lock, drop that line as well. If that is not required
(machine still halted?), drop the related changes here.

Jan

> 
> Cc: stable@vger.kernel.org
> Cc: kvm@vger.kernel.org
> Cc: gleb@redhat.com
> Cc: jan.kiszka@siemens.com
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  virt/kvm/kvm_main.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 979bff4..75522b3 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2681,10 +2681,11 @@ static void hardware_enable_nolock(void *junk)
>  	}
>  }
>  
> -static void hardware_enable(void *junk)
> +static void hardware_enable(void)
>  {
>  	raw_spin_lock(&kvm_lock);
> -	hardware_enable_nolock(junk);
> +	if (kvm_usage_count)
> +		hardware_enable_nolock(NULL);
>  	raw_spin_unlock(&kvm_lock);
>  }
>  
> @@ -2698,10 +2699,11 @@ static void hardware_disable_nolock(void *junk)
>  	kvm_arch_hardware_disable(NULL);
>  }
>  
> -static void hardware_disable(void *junk)
> +static void hardware_disable(void)
>  {
>  	raw_spin_lock(&kvm_lock);
> -	hardware_disable_nolock(junk);
> +	if (kvm_usage_count)
> +		hardware_disable_nolock(NULL);
>  	raw_spin_unlock(&kvm_lock);
>  }
>  
> @@ -2756,12 +2758,12 @@ static int kvm_cpu_hotplug(struct notifier_block *notifier, unsigned long val,
>  	case CPU_DYING:
>  		printk(KERN_INFO "kvm: disabling virtualization on CPU%d\n",
>  		       cpu);
> -		hardware_disable(NULL);
> +		hardware_disable();
>  		break;
>  	case CPU_STARTING:
>  		printk(KERN_INFO "kvm: enabling virtualization on CPU%d\n",
>  		       cpu);
> -		hardware_enable(NULL);
> +		hardware_enable();
>  		break;
>  	}
>  	return NOTIFY_OK;
> 

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux


* Re: [PATCH 1/3] KVM: cleanup (physical) CPU hotplug
  2013-09-17  7:57   ` Jan Kiszka
@ 2013-09-17 23:19     ` Paolo Bonzini
  0 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2013-09-17 23:19 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: linux-kernel, Paul Gortmaker, stable, kvm, gleb

Il 17/09/2013 09:57, Jan Kiszka ha scritto:
>> > Remove the useless argument, and do not do anything if there are no
>> > VMs running at the time of the hotplug.
> kvm_cpu_hotplug already filters !kvm_usage_count. If we need the check
> to be under kvm_lock, drop that line as well. If that is not required
> (machine still halted?), drop the related changes here.

CPU_DYING is called under stop_machine; CPU_STARTING is not.  So I
should drop the test in kvm_cpu_hotplug.  It's a bit clearer anyway
not to rely on stop_machine.

Paolo


* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-16 22:12 ` [PATCH 0/3] KVM: Make kvm_lock non-raw Paul Gortmaker
@ 2013-09-20 17:51   ` Paul Gortmaker
  2013-09-20 18:04     ` Jan Kiszka
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Gortmaker @ 2013-09-20 17:51 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, gleb, jan.kiszka, linux-rt-users

[Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul Gortmaker wrote:

> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
> > Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> > mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> > patch that shrunk the kvm_lock critical section so that the mmu_lock
> > critical section does not nest with it, but in the end there is no reason
> > for the vm_list to be protected by a raw spinlock.  Only manipulations
> > of kvm_usage_count and the consequent hardware_enable/disable operations
> > are not preemptible.
> > 
> > This small series thus splits the kvm_lock in the "raw" part and the
> > "non-raw" part.
> > 
> > Paul, could you please provide your Tested-by?
> 
> Sure, I'll go back and see if I can find what triggered it in the
> original report, and give the patches a spin on 3.4.x-rt (and probably
> 3.10.x-rt, since that is where rt-current is presently).

Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
issues, probably not explicitly related to this patchset (see below).

Paul.
--

e1000e 0000:00:19.0 eth1: removed PHC
assign device 0:0:19.0
pci 0000:00:19.0: irq 43 for MSI/MSI-X
pci 0000:00:19.0: irq 43 for MSI/MSI-X
pci 0000:00:19.0: irq 43 for MSI/MSI-X
pci 0000:00:19.0: irq 43 for MSI/MSI-X
BUG: sleeping function called from invalid context at /home/paul/git/linux-rt/kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
2 locks held by swapper/0/0:
 #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff8100998a>] kvm_set_irq_inatomic+0x2a/0x4a0
 #1:  (rcu_read_lock){.+.+.+}, at: [<ffffffff81038800>] kvm_irq_delivery_to_apic_fast+0x60/0x3d0
irq event stamp: 6121390
hardirqs last  enabled at (6121389): [<ffffffff819f9ae0>] restore_args+0x0/0x30
hardirqs last disabled at (6121390): [<ffffffff819f9a2a>] common_interrupt+0x6a/0x6f
softirqs last  enabled at (0): [<          (null)>]           (null)
softirqs last disabled at (0): [<          (null)>]           (null)
Preemption disabled at:[<ffffffff810ebb9a>] cpu_startup_entry+0x1ba/0x430

CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
 ffffffff8201c440 ffff880223603cf0 ffffffff819f177d ffff880223603d18
 ffffffff810c90d3 ffff880214a50110 0000000000000001 0000000000000001
 ffff880223603d38 ffffffff819f89a4 ffff880214a50110 ffff880214a50110
Call Trace:
 <IRQ>  [<ffffffff819f177d>] dump_stack+0x19/0x1b
 [<ffffffff810c90d3>] __might_sleep+0x153/0x250
 [<ffffffff819f89a4>] rt_spin_lock+0x24/0x60
 [<ffffffff810ccdd6>] __wake_up+0x36/0x70
 [<ffffffff81003bbb>] kvm_vcpu_kick+0x3b/0xd0
 [<ffffffff810371a2>] __apic_accept_irq+0x2b2/0x3a0
 [<ffffffff810385f7>] kvm_apic_set_irq+0x27/0x30
 [<ffffffff8103894e>] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
 [<ffffffff81038800>] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
 [<ffffffff81009a8b>] kvm_set_irq_inatomic+0x12b/0x4a0
 [<ffffffff8100998a>] ? kvm_set_irq_inatomic+0x2a/0x4a0
 [<ffffffff8100c5b3>] kvm_assigned_dev_msi+0x23/0x40
 [<ffffffff8113cb38>] handle_irq_event_percpu+0x88/0x3d0
 [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
 [<ffffffff8113cec8>] handle_irq_event+0x48/0x70
 [<ffffffff8113f9b7>] handle_edge_irq+0x77/0x120
 [<ffffffff8104c6ae>] handle_irq+0x1e/0x30
 [<ffffffff81a035ca>] do_IRQ+0x5a/0xd0
 [<ffffffff819f9a2f>] common_interrupt+0x6f/0x6f
 <EOI>  [<ffffffff819f9ae0>] ? retint_restore_args+0xe/0xe
 [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
 [<ffffffff810ebb38>] ? cpu_startup_entry+0x158/0x430
 [<ffffffff819db767>] rest_init+0x137/0x140
 [<ffffffff819db635>] ? rest_init+0x5/0x140
 [<ffffffff822fde18>] start_kernel+0x3af/0x3bc
 [<ffffffff822fd870>] ? repair_env_string+0x5e/0x5e
 [<ffffffff822fd5a5>] x86_64_start_reservations+0x2a/0x2c
 [<ffffffff822fd673>] x86_64_start_kernel+0xcc/0xcf

=================================
[ INFO: inconsistent lock state ]
3.10.10-rt7 #2 Not tainted
---------------------------------
inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
 (&(&(&q->lock)->lock)->wait_lock){?.+.-.}, at: [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
{HARDIRQ-ON-W} state was registered at:
  [<ffffffff810fc94d>] __lock_acquire+0x69d/0x20e0
  [<ffffffff810feaee>] lock_acquire+0x9e/0x1f0
  [<ffffffff819f9090>] _raw_spin_lock+0x40/0x80
  [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
  [<ffffffff819f89ac>] rt_spin_lock+0x2c/0x60
  [<ffffffff810ccdd6>] __wake_up+0x36/0x70
  [<ffffffff8109c5ce>] run_timer_softirq+0x1be/0x390
  [<ffffffff81092a09>] do_current_softirqs+0x239/0x5b0
  [<ffffffff81092db8>] run_ksoftirqd+0x38/0x60
  [<ffffffff810c5d7c>] smpboot_thread_fn+0x22c/0x340
  [<ffffffff810bbf4d>] kthread+0xcd/0xe0
  [<ffffffff81a019dc>] ret_from_fork+0x7c/0xb0
irq event stamp: 6121390
hardirqs last  enabled at (6121389): [<ffffffff819f9ae0>] restore_args+0x0/0x30
hardirqs last disabled at (6121390): [<ffffffff819f9a2a>] common_interrupt+0x6a/0x6f
softirqs last  enabled at (0): [<          (null)>]           (null)
softirqs last disabled at (0): [<          (null)>]           (null)

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&(&q->lock)->lock)->wait_lock);
  <Interrupt>
    lock(&(&(&q->lock)->lock)->wait_lock);

 *** DEADLOCK ***

2 locks held by swapper/0/0:
 #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff8100998a>] kvm_set_irq_inatomic+0x2a/0x4a0
 #1:  (rcu_read_lock){.+.+.+}, at: [<ffffffff81038800>] kvm_irq_delivery_to_apic_fast+0x60/0x3d0

stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
 ffffffff8262b550 ffff880223603a40 ffffffff819f177d ffff880223603a90
 ffffffff819ec532 0000000000000000 ffffffff00000000 ffff880200000001
 0000000000000002 ffffffff8201ccc0 ffffffff810f9040 0000000000000000
Call Trace:
 <IRQ>  [<ffffffff819f177d>] dump_stack+0x19/0x1b
 [<ffffffff819ec532>] print_usage_bug.part.36+0x28b/0x29a
 [<ffffffff810f9040>] ? check_usage_backwards+0x150/0x150
 [<ffffffff810f9dab>] mark_lock+0x28b/0x6a0
 [<ffffffff810fcbf9>] __lock_acquire+0x949/0x20e0
 [<ffffffff811091f2>] ? __module_text_address+0x12/0x60
 [<ffffffff8110ea8f>] ? is_module_text_address+0x2f/0x60
 [<ffffffff810b8408>] ? __kernel_text_address+0x58/0x80
 [<ffffffff8104dbb2>] ? print_context_stack+0x62/0xf0
 [<ffffffff810feaee>] lock_acquire+0x9e/0x1f0
 [<ffffffff819f7e98>] ? rt_spin_lock_slowlock+0x48/0x370
 [<ffffffff819f9090>] _raw_spin_lock+0x40/0x80
 [<ffffffff819f7e98>] ? rt_spin_lock_slowlock+0x48/0x370
 [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
 [<ffffffff819f89ac>] rt_spin_lock+0x2c/0x60
 [<ffffffff810ccdd6>] __wake_up+0x36/0x70
 [<ffffffff81003bbb>] kvm_vcpu_kick+0x3b/0xd0
 [<ffffffff810371a2>] __apic_accept_irq+0x2b2/0x3a0
 [<ffffffff810385f7>] kvm_apic_set_irq+0x27/0x30
 [<ffffffff8103894e>] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
 [<ffffffff81038800>] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
 [<ffffffff81009a8b>] kvm_set_irq_inatomic+0x12b/0x4a0
 [<ffffffff8100998a>] ? kvm_set_irq_inatomic+0x2a/0x4a0
 [<ffffffff8100c5b3>] kvm_assigned_dev_msi+0x23/0x40
 [<ffffffff8113cb38>] handle_irq_event_percpu+0x88/0x3d0
 [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
 [<ffffffff8113cec8>] handle_irq_event+0x48/0x70
 [<ffffffff8113f9b7>] handle_edge_irq+0x77/0x120
 [<ffffffff8104c6ae>] handle_irq+0x1e/0x30
 [<ffffffff81a035ca>] do_IRQ+0x5a/0xd0
 [<ffffffff819f9a2f>] common_interrupt+0x6f/0x6f
 <EOI>  [<ffffffff819f9ae0>] ? retint_restore_args+0xe/0xe
 [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
 [<ffffffff810ebb38>] ? cpu_startup_entry+0x158/0x430
 [<ffffffff819db767>] rest_init+0x137/0x140
 [<ffffffff819db635>] ? rest_init+0x5/0x140
 [<ffffffff822fde18>] start_kernel+0x3af/0x3bc
 [<ffffffff822fd870>] ? repair_env_string+0x5e/0x5e
 [<ffffffff822fd5a5>] x86_64_start_reservations+0x2a/0x2c
 [<ffffffff822fd673>] x86_64_start_kernel+0xcc/0xcf

> 
> Paul.
> --
> 
> > 
> > Thanks,
> > 
> > Paolo
> > 
> > Paolo Bonzini (3):
> >   KVM: cleanup (physical) CPU hotplug
> >   KVM: protect kvm_usage_count with its own spinlock
> >   KVM: Convert kvm_lock back to non-raw spinlock
> > 
> >  Documentation/virtual/kvm/locking.txt |  8 ++++--
> >  arch/x86/kvm/mmu.c                    |  4 +--
> >  arch/x86/kvm/x86.c                    |  8 +++---
> >  include/linux/kvm_host.h              |  2 +-
> >  virt/kvm/kvm_main.c                   | 51 ++++++++++++++++++-----------------
> >  5 files changed, 40 insertions(+), 33 deletions(-)
> > 


* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-20 17:51   ` Paul Gortmaker
@ 2013-09-20 18:04     ` Jan Kiszka
  2013-09-20 18:18       ` Paul Gortmaker
  2013-09-21 20:26       ` Michael S. Tsirkin
  0 siblings, 2 replies; 20+ messages in thread
From: Jan Kiszka @ 2013-09-20 18:04 UTC (permalink / raw)
  To: Paul Gortmaker
  Cc: Paolo Bonzini, linux-kernel, kvm, gleb, linux-rt-users,
	Alex Williamson, Michael S. Tsirkin

On 2013-09-20 19:51, Paul Gortmaker wrote:
> [Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul Gortmaker wrote:
> 
>> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>> critical section does not nest with it, but in the end there is no reason
>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>> are not preemptible.
>>>
>>> This small series thus splits the kvm_lock in the "raw" part and the
>>> "non-raw" part.
>>>
>>> Paul, could you please provide your Tested-by?
>>
>> Sure, I'll go back and see if I can find what triggered it in the
>> original report, and give the patches a spin on 3.4.x-rt (and probably
>> 3.10.x-rt, since that is where rt-current is presently).
> 
> Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
> issues, probably not explicitly related to this patchset (see below).
> 
> Paul.
> --
> 
> e1000e 0000:00:19.0 eth1: removed PHC
> assign device 0:0:19.0
> pci 0000:00:19.0: irq 43 for MSI/MSI-X
> pci 0000:00:19.0: irq 43 for MSI/MSI-X
> pci 0000:00:19.0: irq 43 for MSI/MSI-X
> pci 0000:00:19.0: irq 43 for MSI/MSI-X
> BUG: sleeping function called from invalid context at /home/paul/git/linux-rt/kernel/rtmutex.c:659
> in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
> 2 locks held by swapper/0/0:
>  #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff8100998a>] kvm_set_irq_inatomic+0x2a/0x4a0
>  #1:  (rcu_read_lock){.+.+.+}, at: [<ffffffff81038800>] kvm_irq_delivery_to_apic_fast+0x60/0x3d0
> irq event stamp: 6121390
> hardirqs last  enabled at (6121389): [<ffffffff819f9ae0>] restore_args+0x0/0x30
> hardirqs last disabled at (6121390): [<ffffffff819f9a2a>] common_interrupt+0x6a/0x6f
> softirqs last  enabled at (0): [<          (null)>]           (null)
> softirqs last disabled at (0): [<          (null)>]           (null)
> Preemption disabled at:[<ffffffff810ebb9a>] cpu_startup_entry+0x1ba/0x430
> 
> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
> Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
>  ffffffff8201c440 ffff880223603cf0 ffffffff819f177d ffff880223603d18
>  ffffffff810c90d3 ffff880214a50110 0000000000000001 0000000000000001
>  ffff880223603d38 ffffffff819f89a4 ffff880214a50110 ffff880214a50110
> Call Trace:
>  <IRQ>  [<ffffffff819f177d>] dump_stack+0x19/0x1b
>  [<ffffffff810c90d3>] __might_sleep+0x153/0x250
>  [<ffffffff819f89a4>] rt_spin_lock+0x24/0x60
>  [<ffffffff810ccdd6>] __wake_up+0x36/0x70
>  [<ffffffff81003bbb>] kvm_vcpu_kick+0x3b/0xd0

-rt lacks an atomic waitqueue for triggering VCPU wakeups on MSIs from
assigned devices directly from the host IRQ handler. We need to disable
this fast-path in -rt or introduce such an abstraction (I did this once
over 2.6.33-rt).

IIRC, VFIO goes the slower path via a kernel thread unconditionally,
thus cannot trigger this. Only legacy device assignment is affected.

Jan

>  [<ffffffff810371a2>] __apic_accept_irq+0x2b2/0x3a0
>  [<ffffffff810385f7>] kvm_apic_set_irq+0x27/0x30
>  [<ffffffff8103894e>] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
>  [<ffffffff81038800>] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>  [<ffffffff81009a8b>] kvm_set_irq_inatomic+0x12b/0x4a0
>  [<ffffffff8100998a>] ? kvm_set_irq_inatomic+0x2a/0x4a0
>  [<ffffffff8100c5b3>] kvm_assigned_dev_msi+0x23/0x40
>  [<ffffffff8113cb38>] handle_irq_event_percpu+0x88/0x3d0
>  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
>  [<ffffffff8113cec8>] handle_irq_event+0x48/0x70
>  [<ffffffff8113f9b7>] handle_edge_irq+0x77/0x120
>  [<ffffffff8104c6ae>] handle_irq+0x1e/0x30
>  [<ffffffff81a035ca>] do_IRQ+0x5a/0xd0
>  [<ffffffff819f9a2f>] common_interrupt+0x6f/0x6f
>  <EOI>  [<ffffffff819f9ae0>] ? retint_restore_args+0xe/0xe
>  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
>  [<ffffffff810ebb38>] ? cpu_startup_entry+0x158/0x430
>  [<ffffffff819db767>] rest_init+0x137/0x140
>  [<ffffffff819db635>] ? rest_init+0x5/0x140
>  [<ffffffff822fde18>] start_kernel+0x3af/0x3bc
>  [<ffffffff822fd870>] ? repair_env_string+0x5e/0x5e
>  [<ffffffff822fd5a5>] x86_64_start_reservations+0x2a/0x2c
>  [<ffffffff822fd673>] x86_64_start_kernel+0xcc/0xcf
> 
> =================================
> [ INFO: inconsistent lock state ]
> 3.10.10-rt7 #2 Not tainted
> ---------------------------------
> inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
> swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
>  (&(&(&q->lock)->lock)->wait_lock){?.+.-.}, at: [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
> {HARDIRQ-ON-W} state was registered at:
>   [<ffffffff810fc94d>] __lock_acquire+0x69d/0x20e0
>   [<ffffffff810feaee>] lock_acquire+0x9e/0x1f0
>   [<ffffffff819f9090>] _raw_spin_lock+0x40/0x80
>   [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
>   [<ffffffff819f89ac>] rt_spin_lock+0x2c/0x60
>   [<ffffffff810ccdd6>] __wake_up+0x36/0x70
>   [<ffffffff8109c5ce>] run_timer_softirq+0x1be/0x390
>   [<ffffffff81092a09>] do_current_softirqs+0x239/0x5b0
>   [<ffffffff81092db8>] run_ksoftirqd+0x38/0x60
>   [<ffffffff810c5d7c>] smpboot_thread_fn+0x22c/0x340
>   [<ffffffff810bbf4d>] kthread+0xcd/0xe0
>   [<ffffffff81a019dc>] ret_from_fork+0x7c/0xb0
> irq event stamp: 6121390
> hardirqs last  enabled at (6121389): [<ffffffff819f9ae0>] restore_args+0x0/0x30
> hardirqs last disabled at (6121390): [<ffffffff819f9a2a>] common_interrupt+0x6a/0x6f
> softirqs last  enabled at (0): [<          (null)>]           (null)
> softirqs last disabled at (0): [<          (null)>]           (null)
> 
> other info that might help us debug this:
>  Possible unsafe locking scenario:
> 
>        CPU0
>        ----
>   lock(&(&(&q->lock)->lock)->wait_lock);
>   <Interrupt>
>     lock(&(&(&q->lock)->lock)->wait_lock);
> 
>  *** DEADLOCK ***
> 
> 2 locks held by swapper/0/0:
>  #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff8100998a>] kvm_set_irq_inatomic+0x2a/0x4a0
>  #1:  (rcu_read_lock){.+.+.+}, at: [<ffffffff81038800>] kvm_irq_delivery_to_apic_fast+0x60/0x3d0
> 
> stack backtrace:
> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
> Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
>  ffffffff8262b550 ffff880223603a40 ffffffff819f177d ffff880223603a90
>  ffffffff819ec532 0000000000000000 ffffffff00000000 ffff880200000001
>  0000000000000002 ffffffff8201ccc0 ffffffff810f9040 0000000000000000
> Call Trace:
>  <IRQ>  [<ffffffff819f177d>] dump_stack+0x19/0x1b
>  [<ffffffff819ec532>] print_usage_bug.part.36+0x28b/0x29a
>  [<ffffffff810f9040>] ? check_usage_backwards+0x150/0x150
>  [<ffffffff810f9dab>] mark_lock+0x28b/0x6a0
>  [<ffffffff810fcbf9>] __lock_acquire+0x949/0x20e0
>  [<ffffffff811091f2>] ? __module_text_address+0x12/0x60
>  [<ffffffff8110ea8f>] ? is_module_text_address+0x2f/0x60
>  [<ffffffff810b8408>] ? __kernel_text_address+0x58/0x80
>  [<ffffffff8104dbb2>] ? print_context_stack+0x62/0xf0
>  [<ffffffff810feaee>] lock_acquire+0x9e/0x1f0
>  [<ffffffff819f7e98>] ? rt_spin_lock_slowlock+0x48/0x370
>  [<ffffffff819f9090>] _raw_spin_lock+0x40/0x80
>  [<ffffffff819f7e98>] ? rt_spin_lock_slowlock+0x48/0x370
>  [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
>  [<ffffffff819f89ac>] rt_spin_lock+0x2c/0x60
>  [<ffffffff810ccdd6>] __wake_up+0x36/0x70
>  [<ffffffff81003bbb>] kvm_vcpu_kick+0x3b/0xd0
>  [<ffffffff810371a2>] __apic_accept_irq+0x2b2/0x3a0
>  [<ffffffff810385f7>] kvm_apic_set_irq+0x27/0x30
>  [<ffffffff8103894e>] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
>  [<ffffffff81038800>] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>  [<ffffffff81009a8b>] kvm_set_irq_inatomic+0x12b/0x4a0
>  [<ffffffff8100998a>] ? kvm_set_irq_inatomic+0x2a/0x4a0
>  [<ffffffff8100c5b3>] kvm_assigned_dev_msi+0x23/0x40
>  [<ffffffff8113cb38>] handle_irq_event_percpu+0x88/0x3d0
>  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
>  [<ffffffff8113cec8>] handle_irq_event+0x48/0x70
>  [<ffffffff8113f9b7>] handle_edge_irq+0x77/0x120
>  [<ffffffff8104c6ae>] handle_irq+0x1e/0x30
>  [<ffffffff81a035ca>] do_IRQ+0x5a/0xd0
>  [<ffffffff819f9a2f>] common_interrupt+0x6f/0x6f
>  <EOI>  [<ffffffff819f9ae0>] ? retint_restore_args+0xe/0xe
>  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
>  [<ffffffff810ebb38>] ? cpu_startup_entry+0x158/0x430
>  [<ffffffff819db767>] rest_init+0x137/0x140
>  [<ffffffff819db635>] ? rest_init+0x5/0x140
>  [<ffffffff822fde18>] start_kernel+0x3af/0x3bc
>  [<ffffffff822fd870>] ? repair_env_string+0x5e/0x5e
>  [<ffffffff822fd5a5>] x86_64_start_reservations+0x2a/0x2c
>  [<ffffffff822fd673>] x86_64_start_kernel+0xcc/0xcf

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-20 18:04     ` Jan Kiszka
@ 2013-09-20 18:18       ` Paul Gortmaker
  2013-09-20 18:27         ` Jan Kiszka
  2013-09-21 20:26       ` Michael S. Tsirkin
  1 sibling, 1 reply; 20+ messages in thread
From: Paul Gortmaker @ 2013-09-20 18:18 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Paolo Bonzini, linux-kernel, kvm, gleb, linux-rt-users,
	Alex Williamson, Michael S. Tsirkin

On 13-09-20 02:04 PM, Jan Kiszka wrote:
> On 2013-09-20 19:51, Paul Gortmaker wrote:
>> [Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul Gortmaker wrote:
>>
>>> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
>>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>>> critical section does not nest with it, but in the end there is no reason
>>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>>> are not preemptable.
>>>>
>>>> This small series thus splits the kvm_lock in the "raw" part and the
>>>> "non-raw" part.
>>>>
>>>> Paul, could you please provide your Tested-by?
>>>
>>> Sure, I'll go back and see if I can find what triggered it in the
>>> original report, and give the patches a spin on 3.4.x-rt (and probably
>>> 3.10.x-rt, since that is where rt-current is presently).
>>
>> Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
>> issues, probably not explicitly related to this patchset (see below).
>>
>> Paul.
>> --
>>
>> e1000e 0000:00:19.0 eth1: removed PHC
>> assign device 0:0:19.0
>> pci 0000:00:19.0: irq 43 for MSI/MSI-X
>> pci 0000:00:19.0: irq 43 for MSI/MSI-X
>> pci 0000:00:19.0: irq 43 for MSI/MSI-X
>> pci 0000:00:19.0: irq 43 for MSI/MSI-X
>> BUG: sleeping function called from invalid context at /home/paul/git/linux-rt/kernel/rtmutex.c:659
>> in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
>> 2 locks held by swapper/0/0:
>>  #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff8100998a>] kvm_set_irq_inatomic+0x2a/0x4a0
>>  #1:  (rcu_read_lock){.+.+.+}, at: [<ffffffff81038800>] kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>> irq event stamp: 6121390
>> hardirqs last  enabled at (6121389): [<ffffffff819f9ae0>] restore_args+0x0/0x30
>> hardirqs last disabled at (6121390): [<ffffffff819f9a2a>] common_interrupt+0x6a/0x6f
>> softirqs last  enabled at (0): [<          (null)>]           (null)
>> softirqs last disabled at (0): [<          (null)>]           (null)
>> Preemption disabled at:[<ffffffff810ebb9a>] cpu_startup_entry+0x1ba/0x430
>>
>> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
>> Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
>>  ffffffff8201c440 ffff880223603cf0 ffffffff819f177d ffff880223603d18
>>  ffffffff810c90d3 ffff880214a50110 0000000000000001 0000000000000001
>>  ffff880223603d38 ffffffff819f89a4 ffff880214a50110 ffff880214a50110
>> Call Trace:
>>  <IRQ>  [<ffffffff819f177d>] dump_stack+0x19/0x1b
>>  [<ffffffff810c90d3>] __might_sleep+0x153/0x250
>>  [<ffffffff819f89a4>] rt_spin_lock+0x24/0x60
>>  [<ffffffff810ccdd6>] __wake_up+0x36/0x70
>>  [<ffffffff81003bbb>] kvm_vcpu_kick+0x3b/0xd0
> 
> -rt lacks an atomic waitqueue for triggering VCPU wakeups on MSIs from
> assigned devices directly from the host IRQ handler. We need to disable
> this fast-path in -rt or introduce such an abstraction (I did this once
> over 2.6.33-rt).

Ah, right -- the simple wait queue support (currently -rt specific)
would have to be used here.  It is on the todo list to get that moved
from -rt into mainline.

Paul.
--

> 
> IIRC, VFIO goes the slower path via a kernel thread unconditionally,
> thus cannot trigger this. Only legacy device assignment is affected.
> 
> Jan
> 
>>  [<ffffffff810371a2>] __apic_accept_irq+0x2b2/0x3a0
>>  [<ffffffff810385f7>] kvm_apic_set_irq+0x27/0x30
>>  [<ffffffff8103894e>] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
>>  [<ffffffff81038800>] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>>  [<ffffffff81009a8b>] kvm_set_irq_inatomic+0x12b/0x4a0
>>  [<ffffffff8100998a>] ? kvm_set_irq_inatomic+0x2a/0x4a0
>>  [<ffffffff8100c5b3>] kvm_assigned_dev_msi+0x23/0x40
>>  [<ffffffff8113cb38>] handle_irq_event_percpu+0x88/0x3d0
>>  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
>>  [<ffffffff8113cec8>] handle_irq_event+0x48/0x70
>>  [<ffffffff8113f9b7>] handle_edge_irq+0x77/0x120
>>  [<ffffffff8104c6ae>] handle_irq+0x1e/0x30
>>  [<ffffffff81a035ca>] do_IRQ+0x5a/0xd0
>>  [<ffffffff819f9a2f>] common_interrupt+0x6f/0x6f
>>  <EOI>  [<ffffffff819f9ae0>] ? retint_restore_args+0xe/0xe
>>  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
>>  [<ffffffff810ebb38>] ? cpu_startup_entry+0x158/0x430
>>  [<ffffffff819db767>] rest_init+0x137/0x140
>>  [<ffffffff819db635>] ? rest_init+0x5/0x140
>>  [<ffffffff822fde18>] start_kernel+0x3af/0x3bc
>>  [<ffffffff822fd870>] ? repair_env_string+0x5e/0x5e
>>  [<ffffffff822fd5a5>] x86_64_start_reservations+0x2a/0x2c
>>  [<ffffffff822fd673>] x86_64_start_kernel+0xcc/0xcf
>>
>> =================================
>> [ INFO: inconsistent lock state ]
>> 3.10.10-rt7 #2 Not tainted
>> ---------------------------------
>> inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
>> swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
>>  (&(&(&q->lock)->lock)->wait_lock){?.+.-.}, at: [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
>> {HARDIRQ-ON-W} state was registered at:
>>   [<ffffffff810fc94d>] __lock_acquire+0x69d/0x20e0
>>   [<ffffffff810feaee>] lock_acquire+0x9e/0x1f0
>>   [<ffffffff819f9090>] _raw_spin_lock+0x40/0x80
>>   [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
>>   [<ffffffff819f89ac>] rt_spin_lock+0x2c/0x60
>>   [<ffffffff810ccdd6>] __wake_up+0x36/0x70
>>   [<ffffffff8109c5ce>] run_timer_softirq+0x1be/0x390
>>   [<ffffffff81092a09>] do_current_softirqs+0x239/0x5b0
>>   [<ffffffff81092db8>] run_ksoftirqd+0x38/0x60
>>   [<ffffffff810c5d7c>] smpboot_thread_fn+0x22c/0x340
>>   [<ffffffff810bbf4d>] kthread+0xcd/0xe0
>>   [<ffffffff81a019dc>] ret_from_fork+0x7c/0xb0
>> irq event stamp: 6121390
>> hardirqs last  enabled at (6121389): [<ffffffff819f9ae0>] restore_args+0x0/0x30
>> hardirqs last disabled at (6121390): [<ffffffff819f9a2a>] common_interrupt+0x6a/0x6f
>> softirqs last  enabled at (0): [<          (null)>]           (null)
>> softirqs last disabled at (0): [<          (null)>]           (null)
>>
>> other info that might help us debug this:
>>  Possible unsafe locking scenario:
>>
>>        CPU0
>>        ----
>>   lock(&(&(&q->lock)->lock)->wait_lock);
>>   <Interrupt>
>>     lock(&(&(&q->lock)->lock)->wait_lock);
>>
>>  *** DEADLOCK ***
>>
>> 2 locks held by swapper/0/0:
>>  #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff8100998a>] kvm_set_irq_inatomic+0x2a/0x4a0
>>  #1:  (rcu_read_lock){.+.+.+}, at: [<ffffffff81038800>] kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>>
>> stack backtrace:
>> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
>> Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
>>  ffffffff8262b550 ffff880223603a40 ffffffff819f177d ffff880223603a90
>>  ffffffff819ec532 0000000000000000 ffffffff00000000 ffff880200000001
>>  0000000000000002 ffffffff8201ccc0 ffffffff810f9040 0000000000000000
>> Call Trace:
>>  <IRQ>  [<ffffffff819f177d>] dump_stack+0x19/0x1b
>>  [<ffffffff819ec532>] print_usage_bug.part.36+0x28b/0x29a
>>  [<ffffffff810f9040>] ? check_usage_backwards+0x150/0x150
>>  [<ffffffff810f9dab>] mark_lock+0x28b/0x6a0
>>  [<ffffffff810fcbf9>] __lock_acquire+0x949/0x20e0
>>  [<ffffffff811091f2>] ? __module_text_address+0x12/0x60
>>  [<ffffffff8110ea8f>] ? is_module_text_address+0x2f/0x60
>>  [<ffffffff810b8408>] ? __kernel_text_address+0x58/0x80
>>  [<ffffffff8104dbb2>] ? print_context_stack+0x62/0xf0
>>  [<ffffffff810feaee>] lock_acquire+0x9e/0x1f0
>>  [<ffffffff819f7e98>] ? rt_spin_lock_slowlock+0x48/0x370
>>  [<ffffffff819f9090>] _raw_spin_lock+0x40/0x80
>>  [<ffffffff819f7e98>] ? rt_spin_lock_slowlock+0x48/0x370
>>  [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
>>  [<ffffffff819f89ac>] rt_spin_lock+0x2c/0x60
>>  [<ffffffff810ccdd6>] __wake_up+0x36/0x70
>>  [<ffffffff81003bbb>] kvm_vcpu_kick+0x3b/0xd0
>>  [<ffffffff810371a2>] __apic_accept_irq+0x2b2/0x3a0
>>  [<ffffffff810385f7>] kvm_apic_set_irq+0x27/0x30
>>  [<ffffffff8103894e>] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
>>  [<ffffffff81038800>] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>>  [<ffffffff81009a8b>] kvm_set_irq_inatomic+0x12b/0x4a0
>>  [<ffffffff8100998a>] ? kvm_set_irq_inatomic+0x2a/0x4a0
>>  [<ffffffff8100c5b3>] kvm_assigned_dev_msi+0x23/0x40
>>  [<ffffffff8113cb38>] handle_irq_event_percpu+0x88/0x3d0
>>  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
>>  [<ffffffff8113cec8>] handle_irq_event+0x48/0x70
>>  [<ffffffff8113f9b7>] handle_edge_irq+0x77/0x120
>>  [<ffffffff8104c6ae>] handle_irq+0x1e/0x30
>>  [<ffffffff81a035ca>] do_IRQ+0x5a/0xd0
>>  [<ffffffff819f9a2f>] common_interrupt+0x6f/0x6f
>>  <EOI>  [<ffffffff819f9ae0>] ? retint_restore_args+0xe/0xe
>>  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
>>  [<ffffffff810ebb38>] ? cpu_startup_entry+0x158/0x430
>>  [<ffffffff819db767>] rest_init+0x137/0x140
>>  [<ffffffff819db635>] ? rest_init+0x5/0x140
>>  [<ffffffff822fde18>] start_kernel+0x3af/0x3bc
>>  [<ffffffff822fd870>] ? repair_env_string+0x5e/0x5e
>>  [<ffffffff822fd5a5>] x86_64_start_reservations+0x2a/0x2c
>>  [<ffffffff822fd673>] x86_64_start_kernel+0xcc/0xcf
> 


* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-20 18:18       ` Paul Gortmaker
@ 2013-09-20 18:27         ` Jan Kiszka
  0 siblings, 0 replies; 20+ messages in thread
From: Jan Kiszka @ 2013-09-20 18:27 UTC (permalink / raw)
  To: Paul Gortmaker
  Cc: Paolo Bonzini, linux-kernel, kvm, gleb, linux-rt-users,
	Alex Williamson, Michael S. Tsirkin

On 2013-09-20 20:18, Paul Gortmaker wrote:
> On 13-09-20 02:04 PM, Jan Kiszka wrote:
>> On 2013-09-20 19:51, Paul Gortmaker wrote:
>>> [Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul Gortmaker wrote:
>>>
>>>> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
>>>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>>>> critical section does not nest with it, but in the end there is no reason
>>>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>>>> are not preemptable.
>>>>>
>>>>> This small series thus splits the kvm_lock in the "raw" part and the
>>>>> "non-raw" part.
>>>>>
>>>>> Paul, could you please provide your Tested-by?
>>>>
>>>> Sure, I'll go back and see if I can find what triggered it in the
>>>> original report, and give the patches a spin on 3.4.x-rt (and probably
>>>> 3.10.x-rt, since that is where rt-current is presently).
>>>
>>> Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
>>> issues, probably not explicitly related to this patchset (see below).
>>>
>>> Paul.
>>> --
>>>
>>> e1000e 0000:00:19.0 eth1: removed PHC
>>> assign device 0:0:19.0
>>> pci 0000:00:19.0: irq 43 for MSI/MSI-X
>>> pci 0000:00:19.0: irq 43 for MSI/MSI-X
>>> pci 0000:00:19.0: irq 43 for MSI/MSI-X
>>> pci 0000:00:19.0: irq 43 for MSI/MSI-X
>>> BUG: sleeping function called from invalid context at /home/paul/git/linux-rt/kernel/rtmutex.c:659
>>> in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
>>> 2 locks held by swapper/0/0:
>>>  #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff8100998a>] kvm_set_irq_inatomic+0x2a/0x4a0
>>>  #1:  (rcu_read_lock){.+.+.+}, at: [<ffffffff81038800>] kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>>> irq event stamp: 6121390
>>> hardirqs last  enabled at (6121389): [<ffffffff819f9ae0>] restore_args+0x0/0x30
>>> hardirqs last disabled at (6121390): [<ffffffff819f9a2a>] common_interrupt+0x6a/0x6f
>>> softirqs last  enabled at (0): [<          (null)>]           (null)
>>> softirqs last disabled at (0): [<          (null)>]           (null)
>>> Preemption disabled at:[<ffffffff810ebb9a>] cpu_startup_entry+0x1ba/0x430
>>>
>>> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
>>> Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
>>>  ffffffff8201c440 ffff880223603cf0 ffffffff819f177d ffff880223603d18
>>>  ffffffff810c90d3 ffff880214a50110 0000000000000001 0000000000000001
>>>  ffff880223603d38 ffffffff819f89a4 ffff880214a50110 ffff880214a50110
>>> Call Trace:
>>>  <IRQ>  [<ffffffff819f177d>] dump_stack+0x19/0x1b
>>>  [<ffffffff810c90d3>] __might_sleep+0x153/0x250
>>>  [<ffffffff819f89a4>] rt_spin_lock+0x24/0x60
>>>  [<ffffffff810ccdd6>] __wake_up+0x36/0x70
>>>  [<ffffffff81003bbb>] kvm_vcpu_kick+0x3b/0xd0
>>
>> -rt lacks an atomic waitqueue for triggering VCPU wakeups on MSIs from
>> assigned devices directly from the host IRQ handler. We need to disable
>> this fast-path in -rt or introduce such an abstraction (I did this once
>> over 2.6.33-rt).
> 
> Ah, right -- the simple wait queue support (currently -rt specific)
> would have to be used here.  It is on the todo list to get that moved
> from -rt into mainline.

Oh, it's there in -rt already - perfect! Once there is a good reason
to merge it upstream, KVM can of course switch over.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux


* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-20 18:04     ` Jan Kiszka
  2013-09-20 18:18       ` Paul Gortmaker
@ 2013-09-21 20:26       ` Michael S. Tsirkin
  1 sibling, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2013-09-21 20:26 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Paul Gortmaker, Paolo Bonzini, linux-kernel, kvm, gleb,
	linux-rt-users, Alex Williamson

On Fri, Sep 20, 2013 at 08:04:19PM +0200, Jan Kiszka wrote:
> On 2013-09-20 19:51, Paul Gortmaker wrote:
> > [Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul Gortmaker wrote:
> > 
> >> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
> >>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> >>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> >>> patch that shrunk the kvm_lock critical section so that the mmu_lock
> >>> critical section does not nest with it, but in the end there is no reason
> >>> for the vm_list to be protected by a raw spinlock.  Only manipulations
> >>> of kvm_usage_count and the consequent hardware_enable/disable operations
> >>> are not preemptable.
> >>>
> >>> This small series thus splits the kvm_lock in the "raw" part and the
> >>> "non-raw" part.
> >>>
> >>> Paul, could you please provide your Tested-by?
> >>
> >> Sure, I'll go back and see if I can find what triggered it in the
> >> original report, and give the patches a spin on 3.4.x-rt (and probably
> >> 3.10.x-rt, since that is where rt-current is presently).
> > 
> > Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
> > issues, probably not explicitly related to this patchset (see below).
> > 
> > Paul.
> > --
> > 
> > e1000e 0000:00:19.0 eth1: removed PHC
> > assign device 0:0:19.0
> > pci 0000:00:19.0: irq 43 for MSI/MSI-X
> > pci 0000:00:19.0: irq 43 for MSI/MSI-X
> > pci 0000:00:19.0: irq 43 for MSI/MSI-X
> > pci 0000:00:19.0: irq 43 for MSI/MSI-X
> > BUG: sleeping function called from invalid context at /home/paul/git/linux-rt/kernel/rtmutex.c:659
> > in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
> > 2 locks held by swapper/0/0:
> >  #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff8100998a>] kvm_set_irq_inatomic+0x2a/0x4a0
> >  #1:  (rcu_read_lock){.+.+.+}, at: [<ffffffff81038800>] kvm_irq_delivery_to_apic_fast+0x60/0x3d0
> > irq event stamp: 6121390
> > hardirqs last  enabled at (6121389): [<ffffffff819f9ae0>] restore_args+0x0/0x30
> > hardirqs last disabled at (6121390): [<ffffffff819f9a2a>] common_interrupt+0x6a/0x6f
> > softirqs last  enabled at (0): [<          (null)>]           (null)
> > softirqs last disabled at (0): [<          (null)>]           (null)
> > Preemption disabled at:[<ffffffff810ebb9a>] cpu_startup_entry+0x1ba/0x430
> > 
> > CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
> > Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
> >  ffffffff8201c440 ffff880223603cf0 ffffffff819f177d ffff880223603d18
> >  ffffffff810c90d3 ffff880214a50110 0000000000000001 0000000000000001
> >  ffff880223603d38 ffffffff819f89a4 ffff880214a50110 ffff880214a50110
> > Call Trace:
> >  <IRQ>  [<ffffffff819f177d>] dump_stack+0x19/0x1b
> >  [<ffffffff810c90d3>] __might_sleep+0x153/0x250
> >  [<ffffffff819f89a4>] rt_spin_lock+0x24/0x60
> >  [<ffffffff810ccdd6>] __wake_up+0x36/0x70
> >  [<ffffffff81003bbb>] kvm_vcpu_kick+0x3b/0xd0
> 
> -rt lacks an atomic waitqueue for triggering VCPU wakeups on MSIs from
> assigned devices directly from the host IRQ handler. We need to disable
> this fast-path in -rt or introduce such an abstraction (I did this once
> over 2.6.33-rt).
> 
> > IIRC, VFIO goes the slower path via a kernel thread unconditionally,
> thus cannot trigger this.

AFAIK VFIO just uses eventfds, and these can inject MSI interrupts
directly from the IRQ handler without going through a thread.


> Only legacy device assignment is affected.
> 
> Jan
> 
> >  [<ffffffff810371a2>] __apic_accept_irq+0x2b2/0x3a0
> >  [<ffffffff810385f7>] kvm_apic_set_irq+0x27/0x30
> >  [<ffffffff8103894e>] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
> >  [<ffffffff81038800>] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
> >  [<ffffffff81009a8b>] kvm_set_irq_inatomic+0x12b/0x4a0
> >  [<ffffffff8100998a>] ? kvm_set_irq_inatomic+0x2a/0x4a0
> >  [<ffffffff8100c5b3>] kvm_assigned_dev_msi+0x23/0x40
> >  [<ffffffff8113cb38>] handle_irq_event_percpu+0x88/0x3d0
> >  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
> >  [<ffffffff8113cec8>] handle_irq_event+0x48/0x70
> >  [<ffffffff8113f9b7>] handle_edge_irq+0x77/0x120
> >  [<ffffffff8104c6ae>] handle_irq+0x1e/0x30
> >  [<ffffffff81a035ca>] do_IRQ+0x5a/0xd0
> >  [<ffffffff819f9a2f>] common_interrupt+0x6f/0x6f
> >  <EOI>  [<ffffffff819f9ae0>] ? retint_restore_args+0xe/0xe
> >  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
> >  [<ffffffff810ebb38>] ? cpu_startup_entry+0x158/0x430
> >  [<ffffffff819db767>] rest_init+0x137/0x140
> >  [<ffffffff819db635>] ? rest_init+0x5/0x140
> >  [<ffffffff822fde18>] start_kernel+0x3af/0x3bc
> >  [<ffffffff822fd870>] ? repair_env_string+0x5e/0x5e
> >  [<ffffffff822fd5a5>] x86_64_start_reservations+0x2a/0x2c
> >  [<ffffffff822fd673>] x86_64_start_kernel+0xcc/0xcf
> > 
> > =================================
> > [ INFO: inconsistent lock state ]
> > 3.10.10-rt7 #2 Not tainted
> > ---------------------------------
> > inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
> > swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
> >  (&(&(&q->lock)->lock)->wait_lock){?.+.-.}, at: [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
> > {HARDIRQ-ON-W} state was registered at:
> >   [<ffffffff810fc94d>] __lock_acquire+0x69d/0x20e0
> >   [<ffffffff810feaee>] lock_acquire+0x9e/0x1f0
> >   [<ffffffff819f9090>] _raw_spin_lock+0x40/0x80
> >   [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
> >   [<ffffffff819f89ac>] rt_spin_lock+0x2c/0x60
> >   [<ffffffff810ccdd6>] __wake_up+0x36/0x70
> >   [<ffffffff8109c5ce>] run_timer_softirq+0x1be/0x390
> >   [<ffffffff81092a09>] do_current_softirqs+0x239/0x5b0
> >   [<ffffffff81092db8>] run_ksoftirqd+0x38/0x60
> >   [<ffffffff810c5d7c>] smpboot_thread_fn+0x22c/0x340
> >   [<ffffffff810bbf4d>] kthread+0xcd/0xe0
> >   [<ffffffff81a019dc>] ret_from_fork+0x7c/0xb0
> > irq event stamp: 6121390
> > hardirqs last  enabled at (6121389): [<ffffffff819f9ae0>] restore_args+0x0/0x30
> > hardirqs last disabled at (6121390): [<ffffffff819f9a2a>] common_interrupt+0x6a/0x6f
> > softirqs last  enabled at (0): [<          (null)>]           (null)
> > softirqs last disabled at (0): [<          (null)>]           (null)
> > 
> > other info that might help us debug this:
> >  Possible unsafe locking scenario:
> > 
> >        CPU0
> >        ----
> >   lock(&(&(&q->lock)->lock)->wait_lock);
> >   <Interrupt>
> >     lock(&(&(&q->lock)->lock)->wait_lock);
> > 
> >  *** DEADLOCK ***
> > 
> > 2 locks held by swapper/0/0:
> >  #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff8100998a>] kvm_set_irq_inatomic+0x2a/0x4a0
> >  #1:  (rcu_read_lock){.+.+.+}, at: [<ffffffff81038800>] kvm_irq_delivery_to_apic_fast+0x60/0x3d0
> > 
> > stack backtrace:
> > CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
> > Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
> >  ffffffff8262b550 ffff880223603a40 ffffffff819f177d ffff880223603a90
> >  ffffffff819ec532 0000000000000000 ffffffff00000000 ffff880200000001
> >  0000000000000002 ffffffff8201ccc0 ffffffff810f9040 0000000000000000
> > Call Trace:
> >  <IRQ>  [<ffffffff819f177d>] dump_stack+0x19/0x1b
> >  [<ffffffff819ec532>] print_usage_bug.part.36+0x28b/0x29a
> >  [<ffffffff810f9040>] ? check_usage_backwards+0x150/0x150
> >  [<ffffffff810f9dab>] mark_lock+0x28b/0x6a0
> >  [<ffffffff810fcbf9>] __lock_acquire+0x949/0x20e0
> >  [<ffffffff811091f2>] ? __module_text_address+0x12/0x60
> >  [<ffffffff8110ea8f>] ? is_module_text_address+0x2f/0x60
> >  [<ffffffff810b8408>] ? __kernel_text_address+0x58/0x80
> >  [<ffffffff8104dbb2>] ? print_context_stack+0x62/0xf0
> >  [<ffffffff810feaee>] lock_acquire+0x9e/0x1f0
> >  [<ffffffff819f7e98>] ? rt_spin_lock_slowlock+0x48/0x370
> >  [<ffffffff819f9090>] _raw_spin_lock+0x40/0x80
> >  [<ffffffff819f7e98>] ? rt_spin_lock_slowlock+0x48/0x370
> >  [<ffffffff819f7e98>] rt_spin_lock_slowlock+0x48/0x370
> >  [<ffffffff819f89ac>] rt_spin_lock+0x2c/0x60
> >  [<ffffffff810ccdd6>] __wake_up+0x36/0x70
> >  [<ffffffff81003bbb>] kvm_vcpu_kick+0x3b/0xd0
> >  [<ffffffff810371a2>] __apic_accept_irq+0x2b2/0x3a0
> >  [<ffffffff810385f7>] kvm_apic_set_irq+0x27/0x30
> >  [<ffffffff8103894e>] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
> >  [<ffffffff81038800>] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
> >  [<ffffffff81009a8b>] kvm_set_irq_inatomic+0x12b/0x4a0
> >  [<ffffffff8100998a>] ? kvm_set_irq_inatomic+0x2a/0x4a0
> >  [<ffffffff8100c5b3>] kvm_assigned_dev_msi+0x23/0x40
> >  [<ffffffff8113cb38>] handle_irq_event_percpu+0x88/0x3d0
> >  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
> >  [<ffffffff8113cec8>] handle_irq_event+0x48/0x70
> >  [<ffffffff8113f9b7>] handle_edge_irq+0x77/0x120
> >  [<ffffffff8104c6ae>] handle_irq+0x1e/0x30
> >  [<ffffffff81a035ca>] do_IRQ+0x5a/0xd0
> >  [<ffffffff819f9a2f>] common_interrupt+0x6f/0x6f
> >  <EOI>  [<ffffffff819f9ae0>] ? retint_restore_args+0xe/0xe
> >  [<ffffffff810ebb7c>] ? cpu_startup_entry+0x19c/0x430
> >  [<ffffffff810ebb38>] ? cpu_startup_entry+0x158/0x430
> >  [<ffffffff819db767>] rest_init+0x137/0x140
> >  [<ffffffff819db635>] ? rest_init+0x5/0x140
> >  [<ffffffff822fde18>] start_kernel+0x3af/0x3bc
> >  [<ffffffff822fd870>] ? repair_env_string+0x5e/0x5e
> >  [<ffffffff822fd5a5>] x86_64_start_reservations+0x2a/0x2c
> >  [<ffffffff822fd673>] x86_64_start_kernel+0xcc/0xcf
> 
> -- 
> Siemens AG, Corporate Technology, CT RTC ITP SES-DE
> Corporate Competence Center Embedded Linux


* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-16 14:06 [PATCH 0/3] KVM: Make kvm_lock non-raw Paolo Bonzini
                   ` (3 preceding siblings ...)
  2013-09-16 22:12 ` [PATCH 0/3] KVM: Make kvm_lock non-raw Paul Gortmaker
@ 2013-09-22  7:42 ` Gleb Natapov
  2013-09-22  8:53   ` Paolo Bonzini
  4 siblings, 1 reply; 20+ messages in thread
From: Gleb Natapov @ 2013-09-22  7:42 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, Paul Gortmaker, kvm, jan.kiszka

On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> patch that shrunk the kvm_lock critical section so that the mmu_lock
> critical section does not nest with it, but in the end there is no reason
> for the vm_list to be protected by a raw spinlock.  Only manipulations
> of kvm_usage_count and the consequent hardware_enable/disable operations
> are not preemptable.
> 
> This small series thus splits the kvm_lock in the "raw" part and the
> "non-raw" part.
> 
> Paul, could you please provide your Tested-by?
> 
Reviewed-by: Gleb Natapov <gleb@redhat.com>

But why should it go to stable?

> Thanks,
> 
> Paolo
> 
> Paolo Bonzini (3):
>   KVM: cleanup (physical) CPU hotplug
>   KVM: protect kvm_usage_count with its own spinlock
>   KVM: Convert kvm_lock back to non-raw spinlock
> 
>  Documentation/virtual/kvm/locking.txt |  8 ++++--
>  arch/x86/kvm/mmu.c                    |  4 +--
>  arch/x86/kvm/x86.c                    |  8 +++---
>  include/linux/kvm_host.h              |  2 +-
>  virt/kvm/kvm_main.c                   | 51 ++++++++++++++++++-----------------
>  5 files changed, 40 insertions(+), 33 deletions(-)
> 
> -- 
> 1.8.3.1

--
			Gleb.


* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-22  7:42 ` Gleb Natapov
@ 2013-09-22  8:53   ` Paolo Bonzini
  2013-09-22  9:53     ` Gleb Natapov
  0 siblings, 1 reply; 20+ messages in thread
From: Paolo Bonzini @ 2013-09-22  8:53 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: linux-kernel, Paul Gortmaker, kvm, jan.kiszka

Il 22/09/2013 09:42, Gleb Natapov ha scritto:
> On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>> critical section does not nest with it, but in the end there is no reason
>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>> of kvm_usage_count and the consequent hardware_enable/disable operations
>> are not preemptable.
>>
>> This small series thus splits the kvm_lock in the "raw" part and the
>> "non-raw" part.
>>
>> Paul, could you please provide your Tested-by?
>>
> Reviewed-by: Gleb Natapov <gleb@redhat.com>
> 
> But why should it go to stable?

It is a regression from before the kvm_lock was made raw.  Secondarily,
it takes a much longer time before a patch hits -rt trees (can even be
as much as a year) and this patch does nothing on non-rt trees.  So
without putting it into stable it would get no actual coverage.

Paolo

* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-22  8:53   ` Paolo Bonzini
@ 2013-09-22  9:53     ` Gleb Natapov
  2013-09-23  6:30       ` Jan Kiszka
  2013-09-23 13:36       ` Paul Gortmaker
  0 siblings, 2 replies; 20+ messages in thread
From: Gleb Natapov @ 2013-09-22  9:53 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, Paul Gortmaker, kvm, jan.kiszka

On Sun, Sep 22, 2013 at 10:53:14AM +0200, Paolo Bonzini wrote:
> Il 22/09/2013 09:42, Gleb Natapov ha scritto:
> > On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
> >> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> >> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> >> patch that shrunk the kvm_lock critical section so that the mmu_lock
> >> critical section does not nest with it, but in the end there is no reason
> >> for the vm_list to be protected by a raw spinlock.  Only manipulations
> >> of kvm_usage_count and the consequent hardware_enable/disable operations
> >> are not preemptable.
> >>
> >> This small series thus splits the kvm_lock in the "raw" part and the
> >> "non-raw" part.
> >>
> >> Paul, could you please provide your Tested-by?
> >>
> > Reviewed-by: Gleb Natapov <gleb@redhat.com>
> > 
> > But why should it go to stable?
> 
> It is a regression from before the kvm_lock was made raw.  Secondarily,
It was made raw in 2.6.39, and the commit message claims that it was done
for -rt's sake; why was the regression noticed only now?

> it takes a much longer time before a patch hits -rt trees (can even be
> as much as a year) and this patch does nothing on non-rt trees.  So
> without putting it into stable it would get no actual coverage.
> 
The change is not completely trivial: it splits a lock. There is no
obvious problem of course, otherwise you wouldn't have sent it and I
wouldn't have acked it :), but that does not mean that the chance of a
problem is zero, so why risk the stability of stable even a little bit
if the patch does not fix anything in stable?

I do not know how -rt development goes and how it affects decisions
about stable acceptance, but why can't they carry the patch in their
tree until they move to 3.12?

--
			Gleb.

* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-22  9:53     ` Gleb Natapov
@ 2013-09-23  6:30       ` Jan Kiszka
  2013-09-23 13:36       ` Paul Gortmaker
  1 sibling, 0 replies; 20+ messages in thread
From: Jan Kiszka @ 2013-09-23  6:30 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: Paolo Bonzini, linux-kernel, Paul Gortmaker, kvm, Steven Rostedt

On 2013-09-22 11:53, Gleb Natapov wrote:
> On Sun, Sep 22, 2013 at 10:53:14AM +0200, Paolo Bonzini wrote:
>> Il 22/09/2013 09:42, Gleb Natapov ha scritto:
>>> On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
>>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>>> critical section does not nest with it, but in the end there is no reason
>>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>>> are not preemptable.
>>>>
>>>> This small series thus splits the kvm_lock in the "raw" part and the
>>>> "non-raw" part.
>>>>
>>>> Paul, could you please provide your Tested-by?
>>>>
>>> Reviewed-by: Gleb Natapov <gleb@redhat.com>
>>>
>>> But why should it go to stable?
>>
>> It is a regression from before the kvm_lock was made raw.  Secondarily,
> It was made raw in 2.6.39, and the commit message claims that it was
> done for -rt's sake; why was the regression noticed only now?

Probably the path is stressed too infrequently. Just checked: the issue
was present from day #1, what a shame.

> 
>> it takes a much longer time before a patch hits -rt trees (can even be
>> as much as a year) and this patch does nothing on non-rt trees.  So
>> without putting it into stable it would get no actual coverage.
>>
> The change is not completely trivial: it splits a lock. There is no
> obvious problem of course, otherwise you wouldn't have sent it and I
> wouldn't have acked it :), but that does not mean that the chance of a
> problem is zero, so why risk the stability of stable even a little bit
> if the patch does not fix anything in stable?
> 
> I do not know how -rt development goes and how it affects decisions
> about stable acceptance, but why can't they carry the patch in their
> tree until they move to 3.12?

I think it would be fair to let stable -rt carry these. -rt requires
more specific patching anyway due to the waitqueue issue Paul reported.
But CC'ing Steven to obtain his view.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux

* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-22  9:53     ` Gleb Natapov
  2013-09-23  6:30       ` Jan Kiszka
@ 2013-09-23 13:36       ` Paul Gortmaker
  2013-09-23 13:44         ` Paolo Bonzini
  1 sibling, 1 reply; 20+ messages in thread
From: Paul Gortmaker @ 2013-09-23 13:36 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: Paolo Bonzini, linux-kernel, kvm, jan.kiszka

On 13-09-22 05:53 AM, Gleb Natapov wrote:
> On Sun, Sep 22, 2013 at 10:53:14AM +0200, Paolo Bonzini wrote:
>> Il 22/09/2013 09:42, Gleb Natapov ha scritto:
>>> On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
>>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>>> critical section does not nest with it, but in the end there is no reason
>>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>>> are not preemptable.
>>>>
>>>> This small series thus splits the kvm_lock in the "raw" part and the
>>>> "non-raw" part.
>>>>
>>>> Paul, could you please provide your Tested-by?
>>>>
>>> Reviewed-by: Gleb Natapov <gleb@redhat.com>
>>>
>>> But why should it go to stable?
>>
>> It is a regression from before the kvm_lock was made raw.  Secondarily,
> It was made raw in 2.6.39, and the commit message claims that it was
> done for -rt's sake; why was the regression noticed only now?
> 
>> it takes a much longer time before a patch hits -rt trees (can even be
>> as much as a year) and this patch does nothing on non-rt trees.  So
>> without putting it into stable it would get no actual coverage.
>>
> The change is not completely trivial: it splits a lock. There is no
> obvious problem of course, otherwise you wouldn't have sent it and I
> wouldn't have acked it :), but that does not mean that the chance of a
> problem is zero, so why risk the stability of stable even a little bit
> if the patch does not fix anything in stable?
> 
> I do not know how -rt development goes and how it affects decisions
> about stable acceptance, but why can't they carry the patch in their
> tree until they move to 3.12?

The -rt tree regularly carries mainline backports that are of interest
to -rt but perhaps not of interest to stable, so there is no problem
doing the same with content like this, if desired.

Thanks,
Paul.
--

> 
> --
> 			Gleb.
> 

* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-23 13:36       ` Paul Gortmaker
@ 2013-09-23 13:44         ` Paolo Bonzini
  2013-09-23 14:59           ` Gleb Natapov
  0 siblings, 1 reply; 20+ messages in thread
From: Paolo Bonzini @ 2013-09-23 13:44 UTC (permalink / raw)
  To: Paul Gortmaker; +Cc: Gleb Natapov, linux-kernel, kvm, jan.kiszka

Il 23/09/2013 15:36, Paul Gortmaker ha scritto:
>> > The change is not completely trivial: it splits a lock. There is no
>> > obvious problem of course, otherwise you wouldn't have sent it and I
>> > wouldn't have acked it :), but that does not mean that the chance of a
>> > problem is zero, so why risk the stability of stable even a little bit
>> > if the patch does not fix anything in stable?
>> > 
>> > I do not know how -rt development goes and how it affects decisions
>> > about stable acceptance, but why can't they carry the patch in their
>> > tree until they move to 3.12?
> The -rt tree regularly carries mainline backports that are of interest
> to -rt but perhaps not of interest to stable, so there is no problem
> doing the same with content like this, if desired.

Perfect, I'll queue [v2 of] these patches for 3.12 then.

Paolo

* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-23 13:44         ` Paolo Bonzini
@ 2013-09-23 14:59           ` Gleb Natapov
  2013-09-23 15:05             ` Paolo Bonzini
  0 siblings, 1 reply; 20+ messages in thread
From: Gleb Natapov @ 2013-09-23 14:59 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Paul Gortmaker, linux-kernel, kvm, jan.kiszka

On Mon, Sep 23, 2013 at 03:44:21PM +0200, Paolo Bonzini wrote:
> Il 23/09/2013 15:36, Paul Gortmaker ha scritto:
> >> > The change is not completely trivial: it splits a lock. There is no
> >> > obvious problem of course, otherwise you wouldn't have sent it and I
> >> > wouldn't have acked it :), but that does not mean that the chance of a
> >> > problem is zero, so why risk the stability of stable even a little bit
> >> > if the patch does not fix anything in stable?
> >> > 
> >> > I do not know how -rt development goes and how it affects decisions
> >> > about stable acceptance, but why can't they carry the patch in their
> >> > tree until they move to 3.12?
> > The -rt tree regularly carries mainline backports that are of interest
> > to -rt but perhaps not of interest to stable, so there is no problem
> > doing the same with content like this, if desired.
> 
> Perfect, I'll queue [v2 of] these patches for 3.12 then.
> 
Why 3.12 if it is not going to stable?

--
			Gleb.

* Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
  2013-09-23 14:59           ` Gleb Natapov
@ 2013-09-23 15:05             ` Paolo Bonzini
  0 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2013-09-23 15:05 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: Paul Gortmaker, linux-kernel, kvm, jan.kiszka

Il 23/09/2013 16:59, Gleb Natapov ha scritto:
> > Perfect, I'll queue [v2 of] these patches for 3.12 then.
> 
> Why 3.12 if it is not going to stable?

Off-by-one. :)

Paolo

end of thread, other threads:[~2013-09-23 15:05 UTC | newest]

Thread overview: 20+ messages
2013-09-16 14:06 [PATCH 0/3] KVM: Make kvm_lock non-raw Paolo Bonzini
2013-09-16 14:06 ` [PATCH 1/3] KVM: cleanup (physical) CPU hotplug Paolo Bonzini
2013-09-17  7:57   ` Jan Kiszka
2013-09-17 23:19     ` Paolo Bonzini
2013-09-16 14:06 ` [PATCH 2/3] KVM: protect kvm_usage_count with its own spinlock Paolo Bonzini
2013-09-16 14:06 ` [PATCH 3/3] KVM: Convert kvm_lock back to non-raw spinlock Paolo Bonzini
2013-09-16 22:12 ` [PATCH 0/3] KVM: Make kvm_lock non-raw Paul Gortmaker
2013-09-20 17:51   ` Paul Gortmaker
2013-09-20 18:04     ` Jan Kiszka
2013-09-20 18:18       ` Paul Gortmaker
2013-09-20 18:27         ` Jan Kiszka
2013-09-21 20:26       ` Michael S. Tsirkin
2013-09-22  7:42 ` Gleb Natapov
2013-09-22  8:53   ` Paolo Bonzini
2013-09-22  9:53     ` Gleb Natapov
2013-09-23  6:30       ` Jan Kiszka
2013-09-23 13:36       ` Paul Gortmaker
2013-09-23 13:44         ` Paolo Bonzini
2013-09-23 14:59           ` Gleb Natapov
2013-09-23 15:05             ` Paolo Bonzini
