Date: Fri, 10 Sep 2021 15:19:22 +0000
From: Sean Christopherson
To: Xiaoyao Li
Cc: Chenyi Qiang, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li,
    Jim Mattson, Joerg Roedel, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH] KVM: nVMX: Fix nested bus lock VM exit
References: <20210827085110.6763-1-chenyi.qiang@intel.com>
 <0f064b93-8375-8cba-6422-ff12f95af656@intel.com>
 <56fa664d-c4e5-066b-2bc8-2f1d2e74b35a@intel.com>
In-Reply-To: <56fa664d-c4e5-066b-2bc8-2f1d2e74b35a@intel.com>

On Fri, Sep 10, 2021, Xiaoyao Li wrote:
> On 9/10/2021 1:59 AM, Sean Christopherson wrote:
> > No, nested_vmx_l0_wants_exit() is specifically for cases where L0 wants
> > to handle the exit even if L1 also wants to handle the exit.  For cases
> > where L0 is expected to handle the exit because L1 does _not_ want the
> > exit, the intent is to not have an entry in nested_vmx_l0_wants_exit().
> > This is a bit of a grey area; arguably L0 "wants" the exit because L0
> > knows BUS_LOCK cannot be exposed to L1.
>
> No.  What I wanted to convey here is exactly "L0 wants to handle it
> because L0 wants it, no matter whether L1 wants it or not (i.e., even if
> L1 wants it)", not "L0 wants it because the feature is not exposed to
> L1 / L1 cannot enable it".
>
> Even in the future case where this feature is exposed to L1 and both L0
> and L1 enable it, it should exit to L0 first for every bus lock that
> happens in an L2 VM, and after L0 handles it, L0 needs to inject a
> BUS_LOCK VM exit to L1 if L1 enables it.  Every bus lock acquired in an
> L2 VM should be regarded as a bus lock in the L1 VM as well; an L2 VM is
> just an application of the L1 VM.
>
> IMO, the flow should be:
>
> 	if (L0 enables it) {
> 		exit to L0;
> 		L0 handling;
> 		if (is_guest_mode(vcpu) && L1 enables it) {
> 			inject BUS_LOCK VM exit to L1;
> 		}
> 	} else if (L1 enables it) {
> 		BUS_LOCK VM exit to L1;
> 	} else {
> 		BUG();
> 	}

Ah, we've speculated differently on how nested support would operate.  Let's
go with the original patch plus a brief comment stating that the feature is
never exposed to L1.  Since that approach doesn't speculate, it can't be
wrong. :-)

Thanks!