From: Emanuele Giuseppe Esposito
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, David Hildenbrand, Maxim Levitsky,
	x86@kernel.org, "H. Peter Anvin", linux-kernel@vger.kernel.org,
	Emanuele Giuseppe Esposito
Subject: [RFC PATCH 7/9] kvm_main.c: duplicate invalid memslot also in inactive list
Date: Fri, 9 Sep 2022 06:45:04 -0400
Message-Id: <20220909104506.738478-8-eesposit@redhat.com>
In-Reply-To: <20220909104506.738478-1-eesposit@redhat.com>
References: <20220909104506.738478-1-eesposit@redhat.com>

In preparation for atomic memslot updates, make sure the invalid memslot
also replaces the old one in the inactive list. This implies that once we
want to insert the new slot for a MOVE, or simply delete the existing one
for a DELETE, we need to remove the "invalid" slot, not the "old" one.
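As a rough illustration of the bookkeeping described above, the toy program
below models the active and inactive memslot sets as single pointers and shows
why, once the invalid copy sits in both sets, the final step only ever has to
replace "invalid" rather than "old". This is a sketch under that assumption:
the struct and helper names (slot, replace) are invented for the example and
are not the real kvm_main.c types, and the real code walks per-set trees and
hash tables rather than swapping one pointer.

#include <stdio.h>

/* Toy stand-in; the real struct kvm_memory_slot is far richer. */
struct slot {
	const char *name;
};

static struct slot old_slot     = { "old" };
static struct slot invalid_slot = { "invalid copy of old" };

/* One pointer per set; a stand-in for the active/inactive memslot lists. */
static struct slot *active;
static struct slot *inactive;

/* Rough model of kvm_replace_memslot(): swap @from for @to in one set. */
static void replace(struct slot **set, struct slot *from, struct slot *to)
{
	if (*set == from)
		*set = to;
}

int main(void)
{
	active = &old_slot;
	inactive = &old_slot;

	/* Invalidation: the invalid copy takes old's place in the active set. */
	replace(&active, &old_slot, &invalid_slot);
	/* This patch: mirror the invalid copy into the inactive set too. */
	replace(&inactive, &old_slot, &invalid_slot);

	/*
	 * DELETE: the slot is dropped by replacing "invalid", not "old",
	 * because "old" no longer appears in either set.
	 */
	replace(&inactive, &invalid_slot, NULL);

	printf("active:   %s\n", active ? active->name : "(none)");
	printf("inactive: %s\n", inactive ? inactive->name : "(none)");
	return 0;
}

Without the extra replacement added by this patch, the DELETE/MOVE path would
still find "old" in the inactive set but "invalid" in the active one, which is
exactly the asymmetry the change removes.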
Signed-off-by: Emanuele Giuseppe Esposito
---
 virt/kvm/kvm_main.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6b73615891f0..31e46f9a06fa 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1830,6 +1830,7 @@ static int kvm_prepare_memslot(struct kvm *kvm,
 		}
 		batch->invalid = invalid_slot;
 		kvm_invalidate_memslot(kvm, old, invalid_slot);
+		kvm_replace_memslot(kvm, old, invalid_slot);
 	}
 
 	r = kvm_prepare_memory_region(kvm, batch);
@@ -1900,10 +1901,14 @@ static int kvm_set_memslot(struct kvm *kvm,
 		return r;
 
 	/*
-	 * if change is DELETE or MOVE, invalid is in active memslots
-	 * and old in inactive, so replace old with new.
+	 * If change is DELETE or MOVE, invalid is in both the active and
+	 * inactive memslot lists. This means that we don't need old anymore,
+	 * and we should replace invalid with new.
 	 */
-	kvm_replace_memslot(kvm, batch->old, batch->new);
+	if (batch->change == KVM_MR_DELETE || batch->change == KVM_MR_MOVE)
+		kvm_replace_memslot(kvm, batch->invalid, batch->new);
+	else
+		kvm_replace_memslot(kvm, batch->old, batch->new);
 
 	/* either old or invalid is the same, since invalid is old's copy */
 	as_id = kvm_memslots_get_as_id(batch->old, batch->new);
-- 
2.31.1