From: Mike Rapoport <email@example.com>
To: Dave Hansen <firstname.lastname@example.org>
Cc: Liran Alon <email@example.com>, "Kirill A. Shutemov" <firstname.lastname@example.org>, Dave Hansen <email@example.com>, Andy Lutomirski <firstname.lastname@example.org>, Peter Zijlstra <email@example.com>, Paolo Bonzini <firstname.lastname@example.org>, Sean Christopherson <email@example.com>, Vitaly Kuznetsov <firstname.lastname@example.org>, Wanpeng Li <email@example.com>, Jim Mattson <firstname.lastname@example.org>, Joerg Roedel <email@example.com>, David Rientjes <firstname.lastname@example.org>, Andrea Arcangeli <email@example.com>, Kees Cook <firstname.lastname@example.org>, Will Drewry <email@example.com>, "Edgecombe, Rick P" <firstname.lastname@example.org>, "Kleen, Andi" <email@example.com>, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, "Kirill A. Shutemov" <firstname.lastname@example.org>
Subject: Re: [RFC 00/16] KVM protected memory extension
Date: Thu, 28 May 2020 00:22:00 +0300
Message-ID: <20200527212200.GH48741@kernel.org>
In-Reply-To: <email@example.com>

On Wed, May 27, 2020 at 08:45:33AM -0700, Dave Hansen wrote:
> On 5/26/20 4:38 AM, Mike Rapoport wrote:
> > On Tue, May 26, 2020 at 01:16:14PM +0300, Liran Alon wrote:
> >> On 26/05/2020 9:17, Mike Rapoport wrote:
> >>> On Mon, May 25, 2020 at 04:47:18PM +0300, Liran Alon wrote:
> >>>> On 22/05/2020 15:51, Kirill A. Shutemov wrote:
> >>>>
> >>> Out of curiosity, do we actually have some numbers for the "non-trivial
> >>> performance cost"? For instance for the KVM use case?
> >>>
> >> Dig into the XPFO mailing-list discussions to find out...
> >> I just remember that this was one of the main concerns regarding XPFO.
> >
> > The XPFO benchmarks measure the total XPFO cost, and a huge share of it
> > comes from TLB shootdowns.
>
> Yes, TLB shootdown when pages transition between owners is huge.  The
> XPFO folks did a lot of work to try to optimize some of this overhead
> away.  But, it's still a concern.
>
> The concern with XPFO was that it could affect *all* application page
> allocation.  This approach cheats a bit and only goes after guest VM
> pages.  It's significantly more work to allocate a page and map it into
> a guest than it is to, for instance, allocate an anonymous user page.
> That means that the *additional* overhead of things like this for guest
> memory matters a lot less.
>
> > It's not exactly a measurement of the impact of direct map
> > fragmentation on a workload running inside a virtual machine.
>
> While the VM *itself* is running, there is zero overhead.  The host
> direct map is not used at *all*.  The guest and host TLB entries share
> the same space in the TLB, so there could be some increased pressure on
> the TLB, but that's a really secondary effect.  It would also only occur
> if the guest exits and the host runs and starts evicting TLB entries.
>
> The other effect I could think of would be when the guest exits and the
> host is doing some work for the guest, like emulation or something.  The
> host would see worse TLB behavior because the host is using the
> (fragmented) direct map.
>
> But, both of those things require VMEXITs.  The more exits, the more
> overhead you _might_ observe.  What I've been hearing from KVM folks is
> that exits are getting more and more rare and the hardware designers are
> working hard to minimize them.

Right, while the guest stays in guest mode, there is no overhead. But
guests still exit sometimes, and I was wondering whether anybody had
measured the difference in overhead with different page sizes used for
the host's direct map. My guesstimate is that the overhead will not
differ much for most workloads, but it would still be interesting to
*know* what it is.

> That's especially good news because it means that even if the situation
> isn't perfect, it's only bound to get *better* over time, not worse.
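(As a starting point for such a measurement, the current split of the
host's direct map between 4k, 2M and 1G mappings can be read from the
x86 `DirectMap*` counters in /proc/meminfo. Below is a minimal sketch,
assuming a Linux x86 host that exposes these counters; the helper name
is hypothetical:)

```python
# Parse the x86 DirectMap counters from /proc/meminfo to see how much
# of the kernel direct map is currently backed by 4k, 2M and 1G pages.
# Illustrative sketch only: the counters are exposed on x86 Linux hosts.

def direct_map_breakdown(path="/proc/meminfo"):
    """Return a dict mapping DirectMap counter names to sizes in kB."""
    counters = {}
    with open(path) as f:
        for line in f:
            if line.startswith("DirectMap"):
                name, value = line.split(":")
                # /proc/meminfo reports all values in kB.
                counters[name.strip()] = int(value.split()[0])
    return counters

if __name__ == "__main__":
    try:
        for name, kb in direct_map_breakdown().items():
            print(f"{name}: {kb} kB")
    except OSError:
        print("no /proc/meminfo on this system")
```

Comparing these counters before and after unmapping guest memory would
show how badly the direct map got split; correlating that with guest
exit-heavy workloads is the interesting part.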
Processors have been aggressively improving performance for decades,
and look where we are now because of it ;-)

--
Sincerely yours,
Mike.