From: "Nikunj A. Dadhania" <nikunj@amd.com>
To: "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>
Cc: Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Brijesh Singh <brijesh.singh@amd.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Peter Gonda <pgonda@google.com>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Bharata B Rao <bharata@amd.com>
Subject: Re: [RFC PATCH 6/6] KVM: SVM: Pin SEV pages in MMU during sev_launch_update_data()
Date: Wed, 19 Jan 2022 12:03:34 +0530	[thread overview]
Message-ID: <0e523405-f52c-b152-1dd3-aa65a9caee3c@amd.com> (raw)
In-Reply-To: <010ef70c-31a2-2831-a2a7-950db14baf23@maciej.szmigiero.name>

Hi Maciej,

On 1/18/2022 8:30 PM, Maciej S. Szmigiero wrote:
> Hi Nikunj,
> 
> On 18.01.2022 12:06, Nikunj A Dadhania wrote:
>> From: Sean Christopherson <sean.j.christopherson@intel.com>
>>
>> Pin the memory for the data being passed to launch_update_data()
>> because it gets encrypted before the guest is first run and must
>> not be moved, which would corrupt it.
>>
>> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
>> [ * Changed hva_to_gva() to take an extra argument and return gpa_t.
>>    * Updated sev_pin_memory_in_mmu() error handling.
>>    * As pinning/unpining pages is handled within MMU, removed
>>      {get,put}_user(). ]
>> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
>> ---
>>   arch/x86/kvm/svm/sev.c | 122 ++++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 119 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
>> index 14aeccfc500b..1ae714e83a3c 100644
>> --- a/arch/x86/kvm/svm/sev.c
>> +++ b/arch/x86/kvm/svm/sev.c
>> @@ -22,6 +22,7 @@
>>   #include <asm/trapnr.h>
>>   #include <asm/fpu/xcr.h>
>>   +#include "mmu.h"
>>   #include "x86.h"
>>   #include "svm.h"
>>   #include "svm_ops.h"
>> @@ -490,6 +491,110 @@ static unsigned long get_num_contig_pages(unsigned long idx,
>>       return pages;
>>   }
>>   +#define SEV_PFERR_RO (PFERR_USER_MASK)
>> +#define SEV_PFERR_RW (PFERR_WRITE_MASK | PFERR_USER_MASK)
>> +
>> +static struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm,
>> +                          unsigned long hva)
>> +{
>> +    struct kvm_memslots *slots = kvm_memslots(kvm);
>> +    struct kvm_memory_slot *memslot;
>> +    int bkt;
>> +
>> +    kvm_for_each_memslot(memslot, bkt, slots) {
>> +        if (hva >= memslot->userspace_addr &&
>> +            hva < memslot->userspace_addr +
>> +            (memslot->npages << PAGE_SHIFT))
>> +            return memslot;
>> +    }
>> +
>> +    return NULL;
>> +}
> 
> We have kvm_for_each_memslot_in_hva_range() now, please don't do a linear
> search through memslots.
> You might need to move the aforementioned macro from kvm_main.c to some
> header file, though.

Sure, let me try optimizing it with this newly added macro.

> 
>> +static gpa_t hva_to_gpa(struct kvm *kvm, unsigned long hva, bool *ro)
>> +{
>> +    struct kvm_memory_slot *memslot;
>> +    gpa_t gpa_offset;
>> +
>> +    memslot = hva_to_memslot(kvm, hva);
>> +    if (!memslot)
>> +        return UNMAPPED_GVA;
>> +
>> +    *ro = !!(memslot->flags & KVM_MEM_READONLY);
>> +    gpa_offset = hva - memslot->userspace_addr;
>> +    return ((memslot->base_gfn << PAGE_SHIFT) + gpa_offset);
>> +}
>> +
>> +static struct page **sev_pin_memory_in_mmu(struct kvm *kvm, unsigned long addr,
>> +                       unsigned long size,
>> +                       unsigned long *npages)
>> +{
>> +    struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
>> +    struct kvm_vcpu *vcpu;
>> +    struct page **pages;
>> +    unsigned long i;
>> +    u32 error_code;
>> +    kvm_pfn_t pfn;
>> +    int idx, ret = 0;
>> +    gpa_t gpa;
>> +    bool ro;
>> +
>> +    pages = sev_alloc_pages(sev, addr, size, npages);
>> +    if (IS_ERR(pages))
>> +        return pages;
>> +
>> +    vcpu = kvm_get_vcpu(kvm, 0);
>> +    if (mutex_lock_killable(&vcpu->mutex)) {
>> +        kvfree(pages);
>> +        return ERR_PTR(-EINTR);
>> +    }
>> +
>> +    vcpu_load(vcpu);
>> +    idx = srcu_read_lock(&kvm->srcu);
>> +
>> +    kvm_mmu_load(vcpu);
>> +
>> +    for (i = 0; i < *npages; i++, addr += PAGE_SIZE) {
>> +        if (signal_pending(current)) {
>> +            ret = -ERESTARTSYS;
>> +            break;
>> +        }
>> +
>> +        if (need_resched())
>> +            cond_resched();
>> +
>> +        gpa = hva_to_gpa(kvm, addr, &ro);
>> +        if (gpa == UNMAPPED_GVA) {
>> +            ret = -EFAULT;
>> +            break;
>> +        }
> 
> This function is going to have worst case O(n²) complexity if called with
> the whole VM memory (or O(n * log(n)) when hva_to_memslot() is modified
> to use kvm_for_each_memslot_in_hva_range()).

I understand your concern and will address it. BTW, this is called only for a small 
fragment of VM memory (<10MB) that needs to be pinned before guest execution 
starts.

> That's really bad for something that can be done in O(n) time - look how
> kvm_for_each_memslot_in_gfn_range() does it over gfns.
> 

I saw one use of kvm_for_each_memslot_in_gfn_range(), in __kvm_zap_rmaps(), and 
that too calls slot_handle_level_range(), which contains a for_each_slot_rmap_range() 
loop. How would that be O(n)?

kvm_for_each_memslot_in_gfn_range() {
	...
	slot_handle_level_range()
	...
}

slot_handle_level_range() {
	...
	for_each_slot_rmap_range() {
		...
	}
	...
}

Regards,
Nikunj
