Message-ID: <0a7cada4844181d50b7ca971af5d8a4731171336.camel@redhat.com>
Subject: Re: [PATCH v6 05/38] KVM: x86: hyper-v: Handle HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls gently
From: Maxim Levitsky
To: Vitaly Kuznetsov, kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran, Yuan Yao, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Tue, 07 Jun 2022 12:30:50 +0300
In-Reply-To: <20220606083655.2014609-6-vkuznets@redhat.com>
References: <20220606083655.2014609-1-vkuznets@redhat.com> <20220606083655.2014609-6-vkuznets@redhat.com>

On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
> Currently, HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls are handled
> the exact same way as HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE{,EX}: by
> flushing the whole VPID and this is sub-optimal. Switch to handling
> these requests with 'flush_tlb_gva()' hooks instead. Use the newly
> introduced TLB flush fifo to queue the requests.
>
> Signed-off-by: Vitaly Kuznetsov
> ---
>  arch/x86/kvm/hyperv.c | 100 +++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 88 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 762b0b699fdf..956072592e2f 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1806,32 +1806,82 @@ static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
>                                   sparse_banks, consumed_xmm_halves, offset);
>  }
>
> -static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu)
> +static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 entries[],
> +                                       int consumed_xmm_halves, gpa_t offset)
> +{
> +       return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt,
> +                                 entries, consumed_xmm_halves, offset);
> +}
> +
> +static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count)
>  {
>         struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
>         struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
>         u64 entry = KVM_HV_TLB_FLUSHALL_ENTRY;
> +       unsigned long flags;
>
>         if (!hv_vcpu)
>                 return;
>
>         tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
>
> -       kfifo_in_spinlocked(&tlb_flush_fifo->entries, &entry, 1, &tlb_flush_fifo->write_lock);
> +       spin_lock_irqsave(&tlb_flush_fifo->write_lock, flags);
> +
> +       /*
> +        * All entries should fit on the fifo leaving one free for 'flush all'
> +        * entry in case another request comes in. In case there's not enough
> +        * space, just put 'flush all' entry there.
> +        */
> +       if (count && entries && count < kfifo_avail(&tlb_flush_fifo->entries)) {
> +               WARN_ON(kfifo_in(&tlb_flush_fifo->entries, entries, count) != count);
> +               goto out_unlock;
> +       }
> +
> +       /*
> +        * Note: full fifo always contains 'flush all' entry, no need to check the
> +        * return value.
> +        */
> +       kfifo_in(&tlb_flush_fifo->entries, &entry, 1);

Very tiny nitpick: maybe call this 'flush_all_entry' instead, just so that
it is a tiny bit easier to notice.
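Something like this is what I have in mind (untested sketch, not part of the
patch, just to illustrate the rename):

	u64 flush_all_entry = KVM_HV_TLB_FLUSHALL_ENTRY;
	...
	/* The name now makes the fallback obvious at the call site. */
	kfifo_in(&tlb_flush_fifo->entries, &flush_all_entry, 1);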
> +
> +out_unlock:
> +       spin_unlock_irqrestore(&tlb_flush_fifo->write_lock, flags);
>  }
>
>  void kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
>  {
>         struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
>         struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> +       u64 entries[KVM_HV_TLB_FLUSH_FIFO_SIZE];
> +       int i, j, count;
> +       gva_t gva;
>
> -       kvm_vcpu_flush_tlb_guest(vcpu);
> -
> -       if (!hv_vcpu)
> +       if (!tdp_enabled || !hv_vcpu) {

I hadn't noticed this in the review I did back then, but is there any reason
for the !tdp_enabled check? Just curious.

> +               kvm_vcpu_flush_tlb_guest(vcpu);
>                 return;
> +       }
>
>         tlb_flush_fifo = &hv_vcpu->tlb_flush_fifo;
>
> +       count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
> +
> +       for (i = 0; i < count; i++) {
> +               if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
> +                       goto out_flush_all;
> +
> +               /*
> +                * Lower 12 bits of 'address' encode the number of additional
> +                * pages to flush.
> +                */
> +               gva = entries[i] & PAGE_MASK;
> +               for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
> +                       static_call(kvm_x86_flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
> +
> +               ++vcpu->stat.tlb_flush;
> +       }
> +       return;
> +
> +out_flush_all:
> +       kvm_vcpu_flush_tlb_guest(vcpu);
>         kfifo_reset_out(&tlb_flush_fifo->entries);
>  }
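For anyone else reading along: each non-'flush all' entry thus packs a
page-aligned GVA and a repeat count into a single u64. A worked example
(made-up value, 4K pages assumed):

	u64 entry = 0x7f1234003ULL;           /* hypothetical fifo entry          */
	u64 gva   = entry & PAGE_MASK;        /* 0x7f1234000: first page to flush */
	u64 nr    = (entry & ~PAGE_MASK) + 1; /* 3 + 1 = 4 pages in total         */
	/* flushes 0x7f1234000, 0x7f1235000, 0x7f1236000 and 0x7f1237000 */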
> @@ -1841,11 +1891,21 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
>         struct hv_tlb_flush_ex flush_ex;
>         struct hv_tlb_flush flush;
>         DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
> +       /*
> +        * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
> +        * entries on the TLB flush fifo. The last entry, however, needs to be
> +        * always left free for 'flush all' entry which gets placed when
> +        * there is not enough space to put all the requested entries.
> +        */
> +       u64 __tlb_flush_entries[KVM_HV_TLB_FLUSH_FIFO_SIZE - 1];
> +       u64 *tlb_flush_entries;
>         u64 valid_bank_mask;
>         u64 sparse_banks[KVM_HV_MAX_SPARSE_VCPU_SET_BITS];
>         struct kvm_vcpu *v;
>         unsigned long i;
>         bool all_cpus;
> +       int consumed_xmm_halves = 0;
> +       gpa_t data_offset;
>
>         /*
>          * The Hyper-V TLFS doesn't allow more than 64 sparse banks, e.g. the
> @@ -1861,10 +1921,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
>                         flush.address_space = hc->ingpa;
>                         flush.flags = hc->outgpa;
>                         flush.processor_mask = sse128_lo(hc->xmm[0]);
> +                       consumed_xmm_halves = 1;
>                 } else {
>                         if (unlikely(kvm_read_guest(kvm, hc->ingpa,
>                                                     &flush, sizeof(flush))))
>                                 return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +                       data_offset = sizeof(flush);
>                 }
>
>                 trace_kvm_hv_flush_tlb(flush.processor_mask,
> @@ -1888,10 +1950,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
>                         flush_ex.flags = hc->outgpa;
>                         memcpy(&flush_ex.hv_vp_set,
>                                &hc->xmm[0], sizeof(hc->xmm[0]));
> +                       consumed_xmm_halves = 2;
>                 } else {
>                         if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
>                                                     sizeof(flush_ex))))
>                                 return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +                       data_offset = sizeof(flush_ex);
>                 }
>
>                 trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
> @@ -1907,25 +1971,37 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
>                         return HV_STATUS_INVALID_HYPERCALL_INPUT;
>
>                 if (all_cpus)
> -                       goto do_flush;
> +                       goto read_flush_entries;
>
>                 if (!hc->var_cnt)
>                         goto ret_success;
>
> -               if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 2,
> -                                         offsetof(struct hv_tlb_flush_ex,
> -                                                  hv_vp_set.bank_contents)))
> +               if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, consumed_xmm_halves,
> +                                         data_offset))
> +                       return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +               data_offset += hc->var_cnt * sizeof(sparse_banks[0]);
> +               consumed_xmm_halves += hc->var_cnt;
> +       }
> +
> +read_flush_entries:
> +       if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
> +           hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX ||
> +           hc->rep_cnt > ARRAY_SIZE(__tlb_flush_entries)) {
> +               tlb_flush_entries = NULL;
> +       } else {
> +               if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries,
> +                                               consumed_xmm_halves, data_offset))
>                         return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +               tlb_flush_entries = __tlb_flush_entries;
>         }
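The sizing here took me a second: since one fifo slot is always kept in
reserve for the 'flush all' fallback, at most KVM_HV_TLB_FLUSH_FIFO_SIZE - 1
precise entries can ever be queued, so reading more reps than that would be
pointless. A rough illustration (16 is an assumed value for
KVM_HV_TLB_FLUSH_FIFO_SIZE, not taken from the patch):

	u64 __tlb_flush_entries[16 - 1];  /* 15 slots; one fifo slot stays free  */

	if (hc->rep_cnt > 15)             /* 16+ GVAs can't be queued precisely, */
		tlb_flush_entries = NULL; /* so enqueue degrades to 'flush all'  */

Makes sense to me.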
>
> -do_flush:
>         /*
>          * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
>          * analyze it here, flush TLB regardless of the specified address space.
>          */
>         if (all_cpus) {
>                 kvm_for_each_vcpu(i, v, kvm)
> -                       hv_tlb_flush_enqueue(v);
> +                       hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
>
>                 kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
>         } else {
> @@ -1935,7 +2011,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
>                         v = kvm_get_vcpu(kvm, i);
>                         if (!v)
>                                 continue;
> -                       hv_tlb_flush_enqueue(v);
> +                       hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt);
>                 }
>
>                 kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);

Besides the nitpick, I don't see anything wrong, but I might have missed
something.

Best regards,
	Maxim Levitsky