From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [RFC PATCH v2 61/69] KVM: VMX: Move AR_BYTES encoder/decoder helpers to common.h
From: Paolo Bonzini
To: isaku.yamahata@intel.com, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 H. Peter Anvin, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
 erdemaktas@google.com, Connor Kuehl, Sean Christopherson, x86@kernel.org,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: isaku.yamahata@gmail.com, Sean Christopherson
Date: Tue, 6 Jul 2021 16:46:41 +0200
Message-ID: <1da13c0d-6cd5-cb8c-5e25-a08d7f816901@redhat.com>
In-Reply-To: <847069aafe640a360007a4c531930e34945e6417.1625186503.git.isaku.yamahata@intel.com>
References: <847069aafe640a360007a4c531930e34945e6417.1625186503.git.isaku.yamahata@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 03/07/21 00:05, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson
>
> Move the AR_BYTES helpers to common.h so that future patches can reuse
> them to decode/encode AR for TDX.
>
> Signed-off-by: Sean Christopherson
> Signed-off-by: Isaku Yamahata
> ---
>  arch/x86/kvm/vmx/common.h | 41 ++++++++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/vmx.c    | 47 ++++-----------------------------------
>  2 files changed, 45 insertions(+), 43 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
> index aa6a569b87d1..755aaec85199 100644
> --- a/arch/x86/kvm/vmx/common.h
> +++ b/arch/x86/kvm/vmx/common.h
> @@ -4,6 +4,7 @@
>
>  #include
>
> +#include
>  #include
>  #include
>
> @@ -119,4 +120,44 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
>  	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
>  }
>
> +static inline u32 vmx_encode_ar_bytes(struct kvm_segment *var)
> +{
> +	u32 ar;
> +
> +	if (var->unusable || !var->present)
> +		ar = 1 << 16;
> +	else {
> +		ar = var->type & 15;
> +		ar |= (var->s & 1) << 4;
> +		ar |= (var->dpl & 3) << 5;
> +		ar |= (var->present & 1) << 7;
> +		ar |= (var->avl & 1) << 12;
> +		ar |= (var->l & 1) << 13;
> +		ar |= (var->db & 1) << 14;
> +		ar |= (var->g & 1) << 15;
> +	}
> +
> +	return ar;
> +}
> +
> +static inline void vmx_decode_ar_bytes(u32 ar, struct kvm_segment *var)
> +{
> +	var->unusable = (ar >> 16) & 1;
> +	var->type = ar & 15;
> +	var->s = (ar >> 4) & 1;
> +	var->dpl = (ar >> 5) & 3;
> +	/*
> +	 * Some userspaces do not preserve unusable property. Since usable
> +	 * segment has to be present according to VMX spec we can use present
> +	 * property to amend userspace bug by making unusable segment always
> +	 * nonpresent. vmx_encode_ar_bytes() already marks nonpresent
> +	 * segment as unusable.
> +	 */
> +	var->present = !var->unusable;
> +	var->avl = (ar >> 12) & 1;
> +	var->l = (ar >> 13) & 1;
> +	var->db = (ar >> 14) & 1;
> +	var->g = (ar >> 15) & 1;
> +}
> +
>  #endif /* __KVM_X86_VMX_COMMON_H */
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3c3bfc80d2bb..40843ca2fb33 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -365,8 +365,6 @@ static const struct kernel_param_ops vmentry_l1d_flush_ops = {
>  };
>  module_param_cb(vmentry_l1d_flush, &vmentry_l1d_flush_ops, NULL, 0644);
>
> -static u32 vmx_segment_access_rights(struct kvm_segment *var);
> -
>  void vmx_vmexit(void);
>
>  #define vmx_insn_failed(fmt...)	\
> @@ -2826,7 +2824,7 @@ static void fix_rmode_seg(int seg, struct kvm_segment *save)
>  	vmcs_write16(sf->selector, var.selector);
>  	vmcs_writel(sf->base, var.base);
>  	vmcs_write32(sf->limit, var.limit);
> -	vmcs_write32(sf->ar_bytes, vmx_segment_access_rights(&var));
> +	vmcs_write32(sf->ar_bytes, vmx_encode_ar_bytes(&var));
>  }
>
>  static void enter_rmode(struct kvm_vcpu *vcpu)
> @@ -3217,7 +3215,6 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>  void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> -	u32 ar;
>
>  	if (vmx->rmode.vm86_active && seg != VCPU_SREG_LDTR) {
>  		*var = vmx->rmode.segs[seg];
> @@ -3231,23 +3228,7 @@ void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
>  	var->base = vmx_read_guest_seg_base(vmx, seg);
>  	var->limit = vmx_read_guest_seg_limit(vmx, seg);
>  	var->selector = vmx_read_guest_seg_selector(vmx, seg);
> -	ar = vmx_read_guest_seg_ar(vmx, seg);
> -	var->unusable = (ar >> 16) & 1;
> -	var->type = ar & 15;
> -	var->s = (ar >> 4) & 1;
> -	var->dpl = (ar >> 5) & 3;
> -	/*
> -	 * Some userspaces do not preserve unusable property. Since usable
> -	 * segment has to be present according to VMX spec we can use present
> -	 * property to amend userspace bug by making unusable segment always
> -	 * nonpresent. vmx_segment_access_rights() already marks nonpresent
> -	 * segment as unusable.
> -	 */
> -	var->present = !var->unusable;
> -	var->avl = (ar >> 12) & 1;
> -	var->l = (ar >> 13) & 1;
> -	var->db = (ar >> 14) & 1;
> -	var->g = (ar >> 15) & 1;
> +	vmx_decode_ar_bytes(vmx_read_guest_seg_ar(vmx, seg), var);
>  }
>
>  static u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
> @@ -3273,26 +3254,6 @@ int vmx_get_cpl(struct kvm_vcpu *vcpu)
>  	}
>  }
>
> -static u32 vmx_segment_access_rights(struct kvm_segment *var)
> -{
> -	u32 ar;
> -
> -	if (var->unusable || !var->present)
> -		ar = 1 << 16;
> -	else {
> -		ar = var->type & 15;
> -		ar |= (var->s & 1) << 4;
> -		ar |= (var->dpl & 3) << 5;
> -		ar |= (var->present & 1) << 7;
> -		ar |= (var->avl & 1) << 12;
> -		ar |= (var->l & 1) << 13;
> -		ar |= (var->db & 1) << 14;
> -		ar |= (var->g & 1) << 15;
> -	}
> -
> -	return ar;
> -}
> -
>  void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> @@ -3327,7 +3288,7 @@ void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
>  	if (is_unrestricted_guest(vcpu) && (seg != VCPU_SREG_LDTR))
>  		var->type |= 0x1; /* Accessed */
>
> -	vmcs_write32(sf->ar_bytes, vmx_segment_access_rights(var));
> +	vmcs_write32(sf->ar_bytes, vmx_encode_ar_bytes(var));
>
>  out:
>  	vmx->emulation_required = emulation_required(vcpu);
> @@ -3374,7 +3335,7 @@ static bool rmode_segment_valid(struct kvm_vcpu *vcpu, int seg)
>  	var.dpl = 0x3;
>  	if (seg == VCPU_SREG_CS)
>  		var.type = 0x3;
> -	ar = vmx_segment_access_rights(&var);
> +	ar = vmx_encode_ar_bytes(&var);
>
>  	if (var.base != (var.selector << 4))
>  		return false;

Reviewed-by: Paolo Bonzini