From: Paolo Bonzini
To: Sean Christopherson
Cc: Ben Gardon, LKML, kvm, Peter Xu, Peter Shier, Junaid Shahid,
	Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong
Subject: Re: [PATCH 5/6] KVM: x86/mmu: Protect kvm->memslots with a mutex
Date: Thu, 29 Apr 2021 09:02:39 +0200
Message-ID: <623c2305-91ae-4617-357e-fe7d9147b656@redhat.com>
References: <20210427223635.2711774-1-bgardon@google.com>
	<20210427223635.2711774-6-bgardon@google.com>
	<997f9fe3-847b-8216-c629-1ad5fdd2ffae@redhat.com>
	<5b4a0c30-118c-da1f-281c-130438a1c833@redhat.com>
	<16b2f0f3-c9a8-c455-fff0-231c2fe04a8e@redhat.com>
List-ID: kvm@vger.kernel.org

On 29/04/21 02:40, Sean Christopherson wrote:
> On Thu, Apr 29, 2021, Paolo Bonzini wrote:
>> it's not ugly and it's still relatively easy to explain.
> 
> LOL, that's debatable.

From your remark below it looks like we have different priorities on
what to avoid modifying.

I like locks to be either very coarse or fine-grained enough for them
to be leaves, as I find that to be the easiest way to avoid deadlocks
and complex lock hierarchies.  For this reason, I treat unlocking in
the middle of a large critical section as "scary by default": you have
to worry about which invariants might be required (in the case of RCU,
which pointers might be stored somewhere and would be invalidated), and
which locks are already held at that point, because the subsequent
relocking could change the lock order from AB to BA.  This applies to
every path leading to the unlock/relock.
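
As a purely illustrative sketch of the AB->BA inversion I mean
(made-up locks and helper, not anything from KVM):

	static DEFINE_MUTEX(A);
	static DEFINE_MUTEX(B);

	mutex_lock(&A);
	mutex_lock(&B);			/* established order: A, then B */
	mutex_unlock(&A);		/* temporarily drop A ... */
	do_work_that_must_not_hold_A();
	mutex_lock(&A);			/* ... reacquired while B is held: B, then A */
	mutex_unlock(&B);
	mutex_unlock(&A);

Any other path that takes A and then B can now deadlock against the
relock above, and ruling that out means auditing every path that
reaches it.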

So instead what matters IMO is shielding architecture code from the
races that Ben had to point out to me, _and the possibility of applying
easily explained rules_ outside the more complex core code.

So, well, "relatively easy" because it's indeed subtle.  But if you
consider what the locking rules are, "you can choose to protect
slots->arch data with this mutex and it will have no problematic
interactions with the memslot copy/update code" is as simple as it can
get.

>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 2799c6660cce..48929dd5fb29 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -1377,16 +1374,17 @@ static int kvm_set_memslot(struct kvm *kvm,
>>  		goto out_slots;
>>  	update_memslots(slots, new, change);
>> -	slots = install_new_memslots(kvm, as_id, slots);
>> +	install_new_memslots(kvm, as_id, slots);
>>  	kvm_arch_commit_memory_region(kvm, mem, old, new, change);
>> -
>> -	kvfree(slots);
>>  	return 0;
>>  out_slots:
>> -	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
>> +	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
>> +		slot = id_to_memslot(slots, old->id);
>> +		slot->flags &= ~KVM_MEMSLOT_INVALID;
> 
> Modifying flags on an SRCU-protected field outside of said protection is
> sketchy.  It's probably ok to do this prior to the generation update,
> emphasis on "probably".  Of course, the VM is also likely about to be
> killed in this case...
> 
>>  		slots = install_new_memslots(kvm, as_id, slots);
> 
> This will explode if memory allocation for KVM_MR_MOVE fails.  In that
> case, the rmaps for "slots" will have been cleared by
> kvm_alloc_memslot_metadata().

I take your subsequent reply as a sort-of-review that the above
approach works, though we may disagree on its elegance and complexity.

Paolo

> The SRCU index is already tracked in vcpu->srcu_idx, why not temporarily
> drop the SRCU lock if activate_shadow_mmu() needs to do work so that it
> can take slots_lock?  That seems simpler and I think it would avoid
> modifying the common memslot code.
> 
> kvm_arch_async_page_ready() is the only path for reaching kvm_mmu_reload()
> that looks scary, but that should be impossible to reach with the correct
> MMU context.  We could always add an explicit sanity check on the rmaps
> being available.
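
For reference, a minimal sketch of the alternative suggested in the
quote above; the activate_shadow_mmu() signature and the exact call
site are assumptions on my part, while the SRCU and slots_lock calls
are the existing kernel APIs:

	/* Leave the vCPU's SRCU read-side section so slots_lock can be taken. */
	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);

	mutex_lock(&vcpu->kvm->slots_lock);
	activate_shadow_mmu(vcpu->kvm);		/* one-time work, assumed to sleep */
	mutex_unlock(&vcpu->kvm->slots_lock);

	/* Re-enter SRCU and refresh the saved index. */
	vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);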