From: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org,
	linux-s390@vger.kernel.org,
	xen-devel-request@lists.xenproject.org, kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org, x86@kernel.org
Cc: benh@kernel.crashing.org, paulus@samba.org, mpe@ellerman.id.au,
	mingo@redhat.com, peterz@infradead.org,
	paulmck@linux.vnet.ibm.com, will.deacon@arm.com,
	kernellwp@gmail.com, jgross@suse.com, pbonzini@redhat.com,
	bsingharora@gmail.com, boqun.feng@gmail.com,
	borntraeger@de.ibm.com, rkrcmar@redhat.com,
	David.Laight@ACULAB.COM, dave@stgolabs.net,
	konrad.wilk@oracle.com,
	Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Subject: [PATCH v7 03/11] kernel/locking: Drop the overload of {mutex,rwsem}_spin_on_owner
Date: Wed,  2 Nov 2016 05:08:30 -0400	[thread overview]
Message-ID: <1478077718-37424-4-git-send-email-xinhui.pan@linux.vnet.ibm.com> (raw)
In-Reply-To: <1478077718-37424-1-git-send-email-xinhui.pan@linux.vnet.ibm.com>

An over-committed guest with more vCPUs than pCPUs spends a lot of time
in mutex_spin_on_owner() and rwsem_spin_on_owner(). This is caused by the
lock holder preemption problem: a spinner keeps busy-waiting on an owner
whose vCPU has been scheduled out.

The kernel now provides bool vcpu_is_preempted(int cpu) to check whether
a vCPU is currently running. Break out of the spin loops when it returns
true.
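
For reference, here is a minimal sketch of what the changelog above assumes:
the generic fallback of vcpu_is_preempted() (the real definition is added
in patch 01 of this series) and the resulting shape of an owner-spinning
loop. This is illustrative only; the actual loops are in the hunks below.

/* Generic fallback; architectures that can detect preemption override it. */
#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)	false
#endif

/* Sketch of an owner-spinning loop with the extra preemption check. */
while (READ_ONCE(lock->owner) == owner) {
	if (!owner->on_cpu || need_resched() ||
	    vcpu_is_preempted(task_cpu(owner)))
		break;	/* the holder may not run soon; stop busy-waiting */
	cpu_relax();
}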

test-case:
perf record -a perf bench sched messaging -g 400 -p && perf report

before patch:
20.68%  sched-messaging  [kernel.vmlinux]  [k] mutex_spin_on_owner
 8.45%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
 4.12%  sched-messaging  [kernel.vmlinux]  [k] system_call
 3.01%  sched-messaging  [kernel.vmlinux]  [k] system_call_common
 2.83%  sched-messaging  [kernel.vmlinux]  [k] copypage_power7
 2.64%  sched-messaging  [kernel.vmlinux]  [k] rwsem_spin_on_owner
 2.00%  sched-messaging  [kernel.vmlinux]  [k] osq_lock

after patch:
 9.99%  sched-messaging  [kernel.vmlinux]  [k] mutex_unlock
 5.28%  sched-messaging  [unknown]         [H] 0xc0000000000768e0
 4.27%  sched-messaging  [kernel.vmlinux]  [k] __copy_tofrom_user_power7
 3.77%  sched-messaging  [kernel.vmlinux]  [k] copypage_power7
 3.24%  sched-messaging  [kernel.vmlinux]  [k] _raw_write_lock_irq
 3.02%  sched-messaging  [kernel.vmlinux]  [k] system_call
 2.69%  sched-messaging  [kernel.vmlinux]  [k] wait_consider_task

Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Juergen Gross <jgross@suse.com>
---
 kernel/locking/mutex.c      | 13 +++++++++++--
 kernel/locking/rwsem-xadd.c | 14 +++++++++++---
 2 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index a70b90d..24face6 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -236,7 +236,11 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 		 */
 		barrier();
 
-		if (!owner->on_cpu || need_resched()) {
+		/*
+		 * Use vcpu_is_preempted() to detect lock holder preemption.
+		 */
+		if (!owner->on_cpu || need_resched() ||
+				vcpu_is_preempted(task_cpu(owner))) {
 			ret = false;
 			break;
 		}
@@ -261,8 +265,13 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
 
 	rcu_read_lock();
 	owner = READ_ONCE(lock->owner);
+
+	/*
+	 * Because of the lock holder preemption problem, skip spinning if the
+	 * task is not on a cpu or its cpu has been preempted.
+	 */
 	if (owner)
-		retval = owner->on_cpu;
+		retval = owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
 	rcu_read_unlock();
 	/*
 	 * if lock->owner is not set, the mutex owner may have just acquired
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 2337b4b..b664ce1 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -336,7 +336,11 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 		goto done;
 	}
 
-	ret = owner->on_cpu;
+	/*
+	 * Because of the lock holder preemption problem, skip spinning if the
+	 * task is not on a cpu or its cpu has been preempted.
+	 */
+	ret = owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
 done:
 	rcu_read_unlock();
 	return ret;
@@ -362,8 +366,12 @@ static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
 		 */
 		barrier();
 
-		/* abort spinning when need_resched or owner is not running */
-		if (!owner->on_cpu || need_resched()) {
+		/*
+		 * Abort spinning when need_resched() is set, the owner is not
+		 * running, or the owner's cpu has been preempted.
+		 */
+		if (!owner->on_cpu || need_resched() ||
+				vcpu_is_preempted(task_cpu(owner))) {
 			rcu_read_unlock();
 			return false;
 		}
-- 
2.4.11

Thread overview: 74+ messages
2016-11-02  9:08 [PATCH v7 00/11] implement vcpu preempted check Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 01/11] kernel/sched: introduce vcpu preempted check interface Pan Xinhui
2016-11-02  9:08   ` Pan Xinhui
2016-11-22 12:31   ` [tip:locking/core] sched/core: Introduce the vcpu_is_preempted(cpu) interface tip-bot for Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 01/11] kernel/sched: introduce vcpu preempted check interface Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 02/11] locking/osq: Drop the overload of osq_lock() Pan Xinhui
2016-11-02  9:08   ` Pan Xinhui
2016-11-22 12:36   ` [tip:locking/core] locking/osq: Break out of spin-wait busy waiting loop for a preempted vCPU in osq_lock() tip-bot for Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 02/11] locking/osq: Drop the overload of osq_lock() Pan Xinhui
2016-11-02  9:08 ` Pan Xinhui [this message]
2016-11-02  9:08   ` [PATCH v7 03/11] kernel/locking: Drop the overload of {mutex, rwsem}_spin_on_owner Pan Xinhui
2016-11-02  9:08   ` Pan Xinhui
2016-11-22 12:36   ` [tip:locking/core] locking/mutex: Break out of expensive busy-loop on {mutex,rwsem}_spin_on_owner() when owner vCPU is preempted tip-bot for Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 03/11] kernel/locking: Drop the overload of {mutex, rwsem}_spin_on_owner Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 04/11] powerpc/spinlock: support vcpu preempted check Pan Xinhui
2016-11-02  9:08   ` Pan Xinhui
2016-11-22 12:32   ` [tip:locking/core] locking/core, powerpc: Implement vcpu_is_preempted(cpu) tip-bot for Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 04/11] powerpc/spinlock: support vcpu preempted check Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 05/11] s390/spinlock: Provide vcpu_is_preempted Pan Xinhui
2016-11-02  9:08 ` Pan Xinhui
2016-11-22 12:32   ` [tip:locking/core] locking/spinlocks, s390: Implement vcpu_is_preempted(cpu) tip-bot for Christian Borntraeger
2016-11-02  9:08 ` [PATCH v7 05/11] s390/spinlock: Provide vcpu_is_preempted Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check Pan Xinhui
2016-11-02  9:08   ` Pan Xinhui
2016-11-15 15:47   ` Peter Zijlstra
2016-11-15 15:47     ` Peter Zijlstra
2016-11-16  4:19     ` Pan Xinhui
2016-11-16  4:19     ` Pan Xinhui
2016-11-16 10:23       ` Peter Zijlstra
2016-11-16 10:23         ` Peter Zijlstra
2016-11-16 11:29         ` Christian Borntraeger
2016-11-16 11:29         ` Christian Borntraeger
2016-11-16 11:29         ` Christian Borntraeger
2016-11-16 11:43           ` Peter Zijlstra
2016-11-16 11:43             ` Peter Zijlstra
2016-11-16 11:43           ` Peter Zijlstra
2016-11-17  5:16         ` Pan Xinhui
2016-11-17  5:16           ` Pan Xinhui
2016-11-17  5:16         ` Pan Xinhui
2016-11-16 10:23       ` Peter Zijlstra
2016-11-16  4:19     ` Pan Xinhui
2016-11-15 15:47   ` Peter Zijlstra
2016-11-22 12:33   ` [tip:locking/core] locking/core, x86/paravirt: Implement vcpu_is_preempted(cpu) for KVM and Xen guests tip-bot for Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 07/11] KVM: Introduce kvm_write_guest_offset_cached Pan Xinhui
2016-11-02  9:08 ` Pan Xinhui
2016-11-02  9:08 ` Pan Xinhui
2016-11-22 12:33   ` [tip:locking/core] kvm: Introduce kvm_write_guest_offset_cached() tip-bot for Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check Pan Xinhui
2016-11-02  9:08 ` Pan Xinhui
2016-11-22 12:34   ` [tip:locking/core] x86/kvm: Support the vCPU preemption check tip-bot for Pan Xinhui
2016-12-19 11:42   ` [PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check Andrea Arcangeli
2016-12-19 11:42     ` Andrea Arcangeli
2016-12-19 11:42     ` [Qemu-devel] " Andrea Arcangeli
2016-12-19 11:42     ` Andrea Arcangeli
2016-12-19 13:56     ` Pan Xinhui
2016-12-19 13:56       ` [Qemu-devel] " Pan Xinhui
2016-12-19 13:56       ` Pan Xinhui
2016-12-19 14:39       ` Paolo Bonzini
2016-12-19 14:39         ` [Qemu-devel] " Paolo Bonzini
2016-12-19 14:39         ` Paolo Bonzini
2016-11-02  9:08 ` Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 09/11] x86, kernel/kvm.c: " Pan Xinhui
2016-11-02  9:08 ` Pan Xinhui
2016-11-02  9:08 ` Pan Xinhui
2016-11-22 12:34   ` [tip:locking/core] x86/kvm: Support the vCPU preemption check tip-bot for Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 10/11] x86, xen: support vcpu preempted check Pan Xinhui
2016-11-02  9:08 ` Pan Xinhui
2016-11-02  9:08   ` Pan Xinhui
2016-11-22 12:35   ` [tip:locking/core] x86/xen: Support the vCPU preemption check tip-bot for Juergen Gross
2016-11-02  9:08 ` [PATCH v7 11/11] Documentation: virtual: kvm: Support vcpu preempted check Pan Xinhui
2016-11-22 12:35   ` [tip:locking/core] Documentation/virtual/kvm: Support the vCPU preemption check tip-bot for Pan Xinhui
2016-11-02  9:08 ` [PATCH v7 11/11] Documentation: virtual: kvm: Support vcpu preempted check Pan Xinhui
2016-11-02  9:08 ` Pan Xinhui
