From: "Nikunj A. Dadhania" <nikunj@linux.vnet.ibm.com>
To: peterz@infradead.org, mingo@elte.hu
Cc: jeremy@goop.org, mtosatti@redhat.com, kvm@vger.kernel.org,
x86@kernel.org, vatsa@linux.vnet.ibm.com,
linux-kernel@vger.kernel.org, avi@redhat.com, hpa@zytor.com
Subject: [RFC PATCH v1 0/5] KVM paravirt remote flush tlb
Date: Fri, 27 Apr 2012 21:53:02 +0530 [thread overview]
Message-ID: <20120427161727.27082.43096.stgit@abhimanyu> (raw)
The remote flush APIs busy-wait, which is fine on bare metal. Within a
guest, however, the target vcpus might have been preempted or blocked;
in that case the initiator vcpu ends up busy-waiting for a long time.
We discovered this in our gang-scheduling tests; another way to solve
it is to paravirtualize flush_tlb_others_ipi.
This patch set implements a paravirt TLB flush that does not wait for
vcpus that are sleeping; instead, the sleeping vcpus flush the TLB on
guest entry. The idea was discussed here:
https://lkml.org/lkml/2012/2/20/157
This patch set depends on the ticketlock[1] and KVM paravirt
spinlock[2] patches, and is based on 3.4.0-rc4 (commit: af3a3ab2).
Here are the results from non-PLE hardware, running the ebizzy
workload inside the VMs. The table shows the ebizzy score normalized
to the baseline.
Machine:
8-CPU Intel Xeon, HT disabled; 64-bit VM (8 vcpus, 1 GB RAM)
        Gang    pv_spin   pv_flush   pv_spin_flush
1VM     1.01    0.30      1.01       0.49
2VMs    7.07    0.53      0.91       4.04
4VMs    9.07    0.59      0.31       5.27
8VMs    9.99    1.58      0.48       7.65
Perf report from the guest VM:
Base:
41.25% [k] flush_tlb_others_ipi
41.21% [k] __bitmap_empty
7.66% [k] _raw_spin_unlock_irqrestore
3.07% [.] __memcpy_ssse3_back
1.20% [k] clear_page
gang:
22.92% [.] __memcpy_ssse3_back
15.46% [k] _raw_spin_unlock_irqrestore
9.82% [k] clear_page
6.35% [k] do_page_fault
4.57% [k] down_read_trylock
3.36% [k] __mem_cgroup_commit_charge
3.26% [k] __x2apic_send_IPI_mask
3.23% [k] up_read
2.87% [k] __bitmap_empty
2.78% [k] flush_tlb_others_ipi
pv_spin:
34.82% [k] __bitmap_empty
34.75% [k] flush_tlb_others_ipi
25.10% [k] _raw_spin_unlock_irqrestore
1.52% [.] __memcpy_ssse3_back
pv_flush:
37.34% [k] _raw_spin_unlock_irqrestore
18.26% [k] native_halt
11.58% [.] __memcpy_ssse3_back
4.83% [k] clear_page
3.68% [k] do_page_fault
pv_spin_flush:
71.13% [k] _raw_spin_unlock_irqrestore
8.89% [.] __memcpy_ssse3_back
4.68% [k] native_halt
3.92% [k] clear_page
2.31% [k] do_page_fault
Looking at the perf output for pv_flush and pv_spin_flush, in both
cases flush_tlb_others_ipi no longer contends for the cpu; it
relinquishes the cpu so that other vcpus can make progress.
Comments?
Regards
Nikunj
1. https://lkml.org/lkml/2012/4/19/335
2. https://lkml.org/lkml/2012/4/23/123
---
Nikunj A. Dadhania (5):
KVM Guest: Add VCPU running/pre-empted state for guest
KVM-HV: Add VCPU running/pre-empted state for guest
KVM: Add paravirt kvm_flush_tlb_others
KVM: export kvm_kick_vcpu for pv_flush
KVM: Introduce PV kick in flush tlb
arch/x86/include/asm/kvm_host.h | 7 ++++
arch/x86/include/asm/kvm_para.h | 11 ++++++
arch/x86/include/asm/tlbflush.h | 9 +++++
arch/x86/kernel/kvm.c | 52 +++++++++++++++++++++++++-----
arch/x86/kvm/cpuid.c | 1 +
arch/x86/kvm/x86.c | 50 ++++++++++++++++++++++++++++-
arch/x86/mm/tlb.c | 68 +++++++++++++++++++++++++++++++++++++++
7 files changed, 188 insertions(+), 10 deletions(-)
Thread overview: 36+ messages
2012-04-27 16:23 Nikunj A. Dadhania [this message]
2012-04-27 16:23 ` [RFC PATCH v1 1/5] KVM Guest: Add VCPU running/pre-empted state for guest Nikunj A. Dadhania
2012-05-01 1:03 ` Raghavendra K T
2012-05-01 3:25 ` Nikunj A Dadhania
2012-04-27 16:23 ` [RFC PATCH v1 2/5] KVM-HV: " Nikunj A. Dadhania
2012-04-27 16:24 ` [RFC PATCH v1 3/5] KVM: Add paravirt kvm_flush_tlb_others Nikunj A. Dadhania
2012-04-29 12:23 ` Avi Kivity
2012-05-01 3:34 ` Nikunj A Dadhania
2012-05-01 9:39 ` Peter Zijlstra
2012-05-01 10:47 ` Avi Kivity
2012-05-01 10:57 ` Peter Zijlstra
2012-05-01 10:59 ` Peter Zijlstra
2012-05-01 22:49 ` Jeremy Fitzhardinge
2012-05-03 14:09 ` Stefano Stabellini
2012-05-01 12:12 ` Avi Kivity
2012-05-01 14:59 ` Peter Zijlstra
2012-05-01 15:31 ` Avi Kivity
2012-05-01 15:36 ` Peter Zijlstra
2012-05-01 15:39 ` Avi Kivity
2012-05-01 15:42 ` Peter Zijlstra
2012-05-01 15:11 ` Peter Zijlstra
2012-05-01 15:33 ` Avi Kivity
2012-05-01 15:14 ` Peter Zijlstra
2012-05-01 15:36 ` Avi Kivity
2012-05-01 16:16 ` Peter Zijlstra
2012-05-01 16:43 ` Paul E. McKenney
2012-05-01 16:18 ` Peter Zijlstra
2012-05-01 16:20 ` Peter Zijlstra
2012-05-02 8:51 ` Nikunj A Dadhania
2012-05-02 10:20 ` Peter Zijlstra
2012-05-02 13:53 ` Nikunj A Dadhania
2012-05-04 4:32 ` Nikunj A Dadhania
2012-05-04 11:44 ` Srivatsa Vaddagiri
2012-05-07 3:10 ` Nikunj A Dadhania
2012-04-27 16:26 ` [RFC PATCH v1 4/5] KVM: get kvm_kick_vcpu out for pv_flush Nikunj A. Dadhania
2012-04-27 16:27 ` [RFC PATCH v1 5/5] KVM: Introduce PV kick in flush tlb Nikunj A. Dadhania