From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 40/59] KVM: arm64: nv: Don't always start an S2 MMU search
 from the beginning
From: Alexandru Elisei
To: Marc Zyngier, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: Andre Przywara, Dave Martin
Date: Tue, 9 Jul 2019 10:59:03 +0100
References: <20190621093843.220980-1-marc.zyngier@arm.com>
 <20190621093843.220980-41-marc.zyngier@arm.com>
In-Reply-To: <20190621093843.220980-41-marc.zyngier@arm.com>

On 6/21/19 10:38 AM, Marc Zyngier wrote:
> Starting a S2 MMU search from the beginning all the time means that
> we're potentially nuking a useful context (like we'd potentially
> have on a !VHE KVM guest).
>
> Instead, let's always start the search from the point *after* the
> last allocated context. This should ensure that alternating between
> two EL1 contexts will not result in nuking the whole S2 each time.
>
> lookup_s2_mmu now has a chance to provide a hit.
>
> Signed-off-by: Marc Zyngier
> ---
>  arch/arm64/include/asm/kvm_host.h |  1 +
>  arch/arm64/kvm/nested.c           | 14 ++++++++++++--
>  2 files changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index b71a7a237f95..b7c44adcdbf3 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -123,6 +123,7 @@ struct kvm_arch {
>  	 */
>  	struct kvm_s2_mmu *nested_mmus;
>  	size_t nested_mmus_size;
> +	int nested_mmus_next;

For consistency, shouldn't nested_mmus_next be zero initialized in
kvm_init_nested (arch/arm64/kvm/nested.c), like nested_mmus and
nested_mmus_size? Not a big deal either way, since struct kvm is
allocated using vzalloc.
> really
>
>  	/* VTCR_EL2 value for this VM */
>  	u64 vtcr;
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 09afafbdc8fe..214d59019935 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -363,14 +363,24 @@ static struct kvm_s2_mmu *get_s2_mmu_nested(struct kvm_vcpu *vcpu)
>  	if (s2_mmu)
>  		goto out;
>
> -	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
> -		s2_mmu = &kvm->arch.nested_mmus[i];
> +	/*
> +	 * Make sure we don't always search from the same point, or we
> +	 * will always reuse a potentially active context, leaving
> +	 * free contexts unused.
> +	 */
> +	for (i = kvm->arch.nested_mmus_next;
> +	     i < (kvm->arch.nested_mmus_size + kvm->arch.nested_mmus_next);
> +	     i++) {
> +		s2_mmu = &kvm->arch.nested_mmus[i % kvm->arch.nested_mmus_size];
>
>  		if (atomic_read(&s2_mmu->refcnt) == 0)
>  			break;
>  	}
>  	BUG_ON(atomic_read(&s2_mmu->refcnt)); /* We have struct MMUs to spare */
>
> +	/* Set the scene for the next search */
> +	kvm->arch.nested_mmus_next = (i + 1) % kvm->arch.nested_mmus_size;
> +
>  	if (kvm_s2_mmu_valid(s2_mmu)) {
>  		/* Clear the old state */
>  		kvm_unmap_stage2_range(s2_mmu, 0, kvm_phys_size(kvm));
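[Editorial note: the round-robin allocation the patch introduces can be sketched
outside the kernel as a plain scan over a fixed pool. This is a simplified model
with stand-in types (struct pool, get_slot are hypothetical names, and an int
replaces the kernel's atomic refcount), not the kernel code itself:]

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the per-VM pool of stage-2 MMU contexts. */
struct pool {
	int refcnt[4];  /* refcnt[i] == 0 means slot i is free */
	size_t size;    /* number of slots (nested_mmus_size) */
	size_t next;    /* where the next search starts (nested_mmus_next) */
};

/*
 * Round-robin search: start at 'next' instead of 0, wrap around with
 * modulo, and remember the slot after the winner for the next search.
 * Returns the index of a free slot, or -1 if every slot is in use
 * (the kernel BUG()s in that case instead of returning an error).
 */
static int get_slot(struct pool *p)
{
	size_t i;

	for (i = p->next; i < p->size + p->next; i++)
		if (p->refcnt[i % p->size] == 0)
			break;

	if (p->refcnt[i % p->size] != 0)
		return -1;                   /* pool exhausted */

	p->next = (i + 1) % p->size;         /* set the scene for the next search */
	return (int)(i % p->size);
}
```

Because the scan resumes after the last allocation, two contexts that
alternate will land in different slots instead of repeatedly evicting slot 0.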