From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ben Gardon
Date: Wed, 3 Feb 2021 09:46:30 -0800
Subject: Re: [PATCH v2 23/28] KVM: x86/mmu: Allow parallel page faults for the TDP MMU
To: Paolo Bonzini
Cc: LKML, kvm, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner,
	Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov,
	Xiao Guangrong
References: <20210202185734.1680553-1-bgardon@google.com> <20210202185734.1680553-24-bgardon@google.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Feb 3, 2021 at 4:40 AM Paolo Bonzini wrote:
>
> On 02/02/21 19:57, Ben Gardon wrote:
> >
> > -       write_lock(&vcpu->kvm->mmu_lock);
> > +
> > +       if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> > +               read_lock(&vcpu->kvm->mmu_lock);
> > +       else
> > +               write_lock(&vcpu->kvm->mmu_lock);
> > +
>
> I'd like to make this into two helper functions, but I'm not sure about
> the naming:
>
> - kvm_mmu_read_lock_for_root/kvm_mmu_read_unlock_for_root: not precise
>   because it's really write-locked
>   for shadow MMU roots
>
> - kvm_mmu_lock_for_root/kvm_mmu_unlock_for_root: not clear that TDP MMU
>   operations will need to operate in shared-lock mode
>
> I prefer the first because at least it's the conservative option, but
> I'm open to other opinions and suggestions.
>
> Paolo
>

Of the above two options, I like the second, though I'd be happy with
either. I agree the first is more conservative, in that it makes clear
the MMU lock could be shared. It feels a little misleading, though, to
have "read" in the name of a function that can then acquire the write
lock, especially since the code below it expects the write lock.

I don't know of a good way to abstract this into a helper without some
comments to make it clear what's going on, but maybe there's a slightly
more open-coded compromise:

if (!kvm_mmu_read_lock_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
        write_lock(&vcpu->kvm->mmu_lock);

or

enum kvm_mmu_lock_mode lock_mode =
        get_mmu_lock_mode_for_root(vcpu->kvm, vcpu->arch.mmu->root_hpa);
....
kvm_mmu_lock_for_mode(lock_mode);

Not sure if either of those is actually clearer, but the latter trends
in the direction the RFC took, having an enum to capture read/write and
whether or not to yield in a lock-mode parameter.