From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>, Peter Zijlstra <peterz@infradead.org>
Cc: linux-arch@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Paolo Bonzini <paolo.bonzini@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>, Rik van Riel <riel@redhat.com>, Linus Torvalds <torvalds@linux-foundation.org>, Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>, David Vrabel <david.vrabel@citrix.com>, Oleg Nesterov <oleg@redhat.com>, Scott J Norton <scott.norton@hp.com>, Douglas Hatch <doug.hatch@hp.com>, Waiman Long <Waiman.Long@hp.com>
Subject: [PATCH v12 00/11] qspinlock: a 4-byte queue spinlock with PV support
Date: Thu, 16 Oct 2014 14:10:29 -0400
Message-ID: <1413483040-58399-1-git-send-email-Waiman.Long@hp.com>

v11->v12:
 - Based on PeterZ's version of the qspinlock patch
   (https://lkml.org/lkml/2014/6/15/63).
 - Incorporated many of the review comments from Konrad Wilk and
   Paolo Bonzini.
 - The pvqspinlock code is largely from my previous version, with
   PeterZ's way of going from queue tail to head and his idea of using
   callee-saved calls into the KVM and Xen code.

v10->v11:
 - Use a simple test-and-set unfair lock to simplify the code, though
   performance may suffer a bit for large guests with many CPUs.
 - Take out Raghavendra KT's test results, as the unfair lock changes
   may render some of his results invalid.
 - Add PV support without increasing the size of the core queue node
   structure.
 - Other minor changes to address some of the feedback comments.

v9->v10:
 - Make some minor changes to qspinlock.c to accommodate review
   feedback.
 - Change author to PeterZ for 2 of the patches.
 - Include Raghavendra KT's test results in patch 18.
v8->v9:
 - Integrate PeterZ's version of the queue spinlock patch with some
   modification:
   http://lkml.kernel.org/r/20140310154236.038181843@infradead.org
 - Break the more complex patches into smaller ones to ease review
   effort.
 - Fix a race condition in the PV qspinlock code.

v7->v8:
 - Remove one unneeded atomic operation from the slowpath, thus
   improving performance.
 - Simplify some of the code and add more comments.
 - Test for the X86_FEATURE_HYPERVISOR CPU feature bit to
   enable/disable the unfair lock.
 - Reduce the unfair lock slowpath's lock-stealing frequency depending
   on its distance from the queue head.
 - Add performance data for the IvyBridge-EX CPU.

v6->v7:
 - Remove an atomic operation from the 2-task contending code.
 - Shorten the names of some macros.
 - Make the queue waiter attempt to steal the lock when the unfair
   lock is enabled.
 - Remove the lock-holder kick from the PV code and fix a race
   condition.
 - Run the unfair lock & PV code on overcommitted KVM guests to
   collect performance data.

v5->v6:
 - Change the optimized 2-task contending code to make it fairer at
   the expense of a bit of performance.
 - Add a patch to support the unfair queue spinlock on Xen.
 - Modify the PV qspinlock code to follow what was done in the PV
   ticketlock.
 - Add performance data for the unfair lock as well as the PV support
   code.

v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of the unfair queue spinlock in a real
   para-virtualized execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and the queue head stay alive as much as
   possible.

v3->v4:
 - Remove debugging code and fix a configuration error.
 - Simplify the qspinlock structure and streamline the code to make it
   perform a bit better.
 - Add an x86 version of asm/qspinlock.h for holding x86-specific
   optimizations.
 - Add an optimized x86 code path for 2 contending tasks to improve
   low-contention performance.

v2->v3:
 - Simplify the code by using numerous mode only, without an unfair
   option.
 - Use the latest smp_load_acquire()/smp_store_release() barriers.
 - Move the queue spinlock code to kernel/locking.
 - Make the use of the queue spinlock the default for x86-64 without
   user configuration.
 - Additional performance tuning.

v1->v2:
 - Add some more comments to document what the code does.
 - Add a numerous CPU mode to support >= 16K CPUs.
 - Add a configuration option to allow lock stealing, which can
   further improve performance in many cases.
 - Enable wakeup of the queue head CPU at unlock time for the
   non-numerous CPU mode.

This patch set has 3 different sections:

 1) Patches 1-6: Introduce a queue-based spinlock implementation that
    can replace the default ticket spinlock without increasing the
    size of the spinlock data structure. As a result, critical kernel
    data structures that embed spinlocks won't increase in size or
    break data alignment.

 2) Patch 7: Enables the use of an unfair lock in a virtual guest.
    This can resolve some locking-related performance issues caused by
    the next CPU in line for the lock having been scheduled out for a
    period of time.

 3) Patches 8-11: Enable qspinlock para-virtualization support by
    halting the waiting CPUs after they have spun for a certain amount
    of time. The unlock code will detect a sleeping waiter and wake it
    up. This is essentially the same logic as in the PV ticketlock
    code.

The queue spinlock performs slightly better than the ticket spinlock
in the uncontended case, and can perform much better under moderate to
heavy contention. This patch set therefore has the potential to
improve the performance of all workloads with moderate to heavy
spinlock contention.

The queue spinlock is especially suitable for NUMA machines with at
least 2 sockets, though even at the 2-socket level there can be
significant speedup depending on the workload.
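To illustrate how the lock can stay at 4 bytes while still encoding a
waiter queue, here is a rough userspace sketch of such a lock word
(locked byte + pending bit + queue tail). The field widths, macro
names, and helper functions below are illustrative only; the actual
layout and API live in include/asm-generic/qspinlock_types.h and
include/asm-generic/qspinlock.h in the series, not in this sketch.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdbool.h>

/*
 * Illustrative 4-byte queue spinlock word:
 *  bits  0-7 : locked byte (1 = lock held)
 *  bit     8 : pending bit (one spinning waiter that avoids queueing)
 *  bits 16-31: tail, identifying the last queued waiter's per-CPU node
 */
#define Q_LOCKED_VAL   (1U << 0)
#define Q_PENDING_VAL  (1U << 8)
#define Q_TAIL_OFFSET  16

typedef struct {
	_Atomic uint32_t val;
} qspinlock_t;

/* Fast path: a single compare-and-swap from 0 to LOCKED. */
static bool qspin_trylock(qspinlock_t *l)
{
	uint32_t old = 0;

	return atomic_compare_exchange_strong(&l->val, &old, Q_LOCKED_VAL);
}

/*
 * Unlock: the holder owns the locked byte exclusively, so it only
 * clears that byte with release semantics; the pending bit and tail
 * belong to the waiters and are left untouched.
 */
static void qspin_unlock(qspinlock_t *l)
{
	atomic_fetch_and_explicit(&l->val, ~Q_LOCKED_VAL,
				  memory_order_release);
}
```

Because the tail only names a (CPU, queue-node index) pair rather than
holding a pointer, 16 bits suffice for the queue, which is what keeps
embedded spinlocks from growing.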
I got a report that the queue spinlock patch can improve the
performance of an I/O- and interrupt-intensive stress test with a lot
of spinlock contention on a 2-socket system by up to 20%.

The purpose of this patch set is not to solve any particular spinlock
contention problem. Those problems need to be solved by refactoring
the code to make more efficient use of the lock, or by using locks of
finer granularity. The main purpose is to make lock contention
problems more tolerable until someone can spend the time and effort to
fix them.

Peter Zijlstra (3):
  qspinlock: Add pending bit
  qspinlock: Optimize for smaller NR_CPUS
  qspinlock: Revert to test-and-set on hypervisors

Waiman Long (8):
  qspinlock: A simple generic 4-byte queue spinlock
  qspinlock, x86: Enable x86-64 to use queue spinlock
  qspinlock: Extract out code snippets for the next patch
  qspinlock: Use a simple write to grab the lock
  qspinlock, x86: Rename paravirt_ticketlocks_enabled
  pvqspinlock, x86: Add para-virtualization support
  pvqspinlock, x86: Enable PV qspinlock for KVM
  pvqspinlock, x86: Enable PV qspinlock for XEN

 arch/x86/Kconfig                      |   1 +
 arch/x86/include/asm/paravirt.h       |  20 ++
 arch/x86/include/asm/paravirt_types.h |  20 ++
 arch/x86/include/asm/pvqspinlock.h    | 403 ++++++++++++++++++++++++++++
 arch/x86/include/asm/qspinlock.h      |  77 ++++++
 arch/x86/include/asm/spinlock.h       |   9 +-
 arch/x86/include/asm/spinlock_types.h |   4 +
 arch/x86/kernel/kvm.c                 | 140 ++++++++++-
 arch/x86/kernel/paravirt-spinlocks.c  |  10 +-
 arch/x86/xen/spinlock.c               | 151 ++++++++++-
 include/asm-generic/qspinlock.h       | 125 +++++++++
 include/asm-generic/qspinlock_types.h |  79 ++++++
 kernel/Kconfig.locks                  |   7 +
 kernel/locking/Makefile               |   1 +
 kernel/locking/mcs_spinlock.h         |   1 +
 kernel/locking/qspinlock.c            | 476 +++++++++++++++++++++++++++++++++
 16 files changed, 1512 insertions(+), 12 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h
 create mode 100644 arch/x86/include/asm/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c
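For the "Revert to test-and-set on hypervisors" patch, the idea is
that queueing is counterproductive when vCPUs can be preempted, so the
slowpath degrades to a plain test-and-set lock. A minimal userspace
sketch of such a fallback follows; the function and type names here
are made up for illustration and are not the ones used in the series.

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Sketch of an unfair test-and-set fallback for virtualized guests:
 * any spinning vCPU can grab the lock the moment it is free, so a
 * preempted waiter never blocks the others the way a queued waiter
 * would.
 */
typedef struct {
	_Atomic uint8_t locked;
} tas_lock_t;

static void tas_spin_lock(tas_lock_t *l)
{
	for (;;) {
		/* Test before test-and-set to avoid hammering the
		 * cache line with read-modify-write operations. */
		if (!atomic_load_explicit(&l->locked,
					  memory_order_relaxed) &&
		    !atomic_exchange_explicit(&l->locked, 1,
					      memory_order_acquire))
			return;
		/* the kernel would use cpu_relax() here */
	}
}

static void tas_spin_unlock(tas_lock_t *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```

The trade-off is fairness: under heavy contention a test-and-set lock
can starve individual waiters, which is acceptable in a guest where
preemption already makes queue order meaningless.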