From: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org,
linuxppc-dev@lists.ozlabs.org
Cc: benh@kernel.crashing.org, paulus@samba.org, mpe@ellerman.id.au,
peterz@infradead.org, mingo@redhat.com,
paulmck@linux.vnet.ibm.com, waiman.long@hpe.com, mnipxh@163.com,
boqun.feng@gmail.com, Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Subject: [PATCH v3 0/6] powerpc: use pv-qspinlock instead of spinlock
Date: Tue, 17 May 2016 15:49:41 +0800 [thread overview]
Message-ID: <1463471387-5115-1-git-send-email-xinhui.pan@linux.vnet.ibm.com> (raw)
Changes from v1:
- split the single patch into 6 patches
- some minor code changes
Benchmark results are below.
run 3 tests on pseries IBM,8408-E8E with 32 CPUs and 64GB memory:
perf bench futex hash
perf bench futex lock-pi
perf record -advRT -- perf bench sched messaging -g 1000; perf report
summary:
_____test__________________spinlock________pv-qspinlock____
|futex hash      |   556370 ops  |   629634 ops  |
|futex lock-pi   |      362 ops  |      367 ops  |
|sched messaging |  23% in locks |  13% in locks |
(the last row is the share of cycles spent in the top lock function,
taken from the perf profiles below)
details:
spinlock:
test 1)
# Running futex/hash benchmark...
Run summary [PID 9962]: 32 threads, each operating on 1024 [private] futexes for 10 secs.
Averaged 556370 operations/sec (+- 0.61%), total secs = 10
test 2)
# Running futex/lock-pi benchmark...
Run summary [PID 9962]: 32 threads doing pi lock/unlock pairing for 10 secs.
Averaged 362 operations/sec (+- 0.00%), total secs = 10
test 3)
perf record -advRT -- perf bench sched messaging -g 1000
perf report
# Samples: 2M of event 'cycles:ppp'
# Event count (approx.): 2045582241213
#
# Overhead Command Shared Object Symbol
# ........ ............... ................... ........................................................
#
22.96% sched-messaging [kernel.kallsyms] [k] _raw_spin_lock_irqsave
19.76% sched-messaging [kernel.kallsyms] [k] __spin_yield
2.09% sched-messaging [kernel.kallsyms] [k] __slab_free
2.07% sched-messaging [kernel.kallsyms] [k] unix_stream_read_generic
pv-qspinlock:
test 1)
# Running futex/hash benchmark...
Run summary [PID 3219]: 32 threads, each operating on 1024 [private] futexes for 10 secs.
Averaged 629634 operations/sec (+- 0.38%), total secs = 10
test 2)
# Running futex/lock-pi benchmark...
Run summary [PID 3219]: 32 threads doing pi lock/unlock pairing for 10 secs.
Averaged 367 operations/sec (+- 0.00%), total secs = 10
test 3)
perf record -advRT -- perf bench sched messaging -g 1000
perf report
# Samples: 1M of event 'cycles:ppp'
# Event count (approx.): 1250040606393
#
# Overhead Command Shared Object Symbol
# ........ ............... .............................. ........................................................
#
9.87% sched-messaging [kernel.vmlinux] [k] __pv_queued_spin_lock_slowpath
3.66% sched-messaging [kernel.vmlinux] [k] __pv_queued_spin_unlock
3.37% sched-messaging [kernel.vmlinux] [k] __slab_free
3.06% sched-messaging [kernel.vmlinux] [k] unix_stream_read_generic
Pan Xinhui (6):
qspinlock: powerpc support qspinlock
powerpc: pseries/Kconfig: qspinlock build config
powerpc: lib/locks.c: cpu yield/wake helper function
pv-qspinlock: powerpc support pv-qspinlock
pv-qspinlock: use cmpxchg_release in __pv_queued_spin_unlock
powerpc: pseries: pv-qspinlock build config/make
arch/powerpc/include/asm/qspinlock.h | 39 +++++++++++++++++++
arch/powerpc/include/asm/qspinlock_paravirt.h | 38 +++++++++++++++++++
.../powerpc/include/asm/qspinlock_paravirt_types.h | 13 +++++++
arch/powerpc/include/asm/spinlock.h | 31 +++++++++------
arch/powerpc/include/asm/spinlock_types.h | 4 ++
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/kernel/paravirt.c | 44 ++++++++++++++++++++++
arch/powerpc/lib/locks.c | 36 ++++++++++++++++++
arch/powerpc/platforms/pseries/Kconfig | 9 +++++
arch/powerpc/platforms/pseries/setup.c | 5 +++
kernel/locking/qspinlock_paravirt.h | 2 +-
11 files changed, 209 insertions(+), 13 deletions(-)
create mode 100644 arch/powerpc/include/asm/qspinlock.h
create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt.h
create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt_types.h
create mode 100644 arch/powerpc/kernel/paravirt.c
--
1.9.1