From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 20 Apr 2023 08:17:29 -0700
In-Reply-To: <20230419221716.3603068-11-atishp@rivosinc.com>
Mime-Version: 1.0
References: <20230419221716.3603068-1-atishp@rivosinc.com>
 <20230419221716.3603068-11-atishp@rivosinc.com>
Message-ID: 
Subject: Re: [RFC 10/48] RISC-V: KVM: Implement static memory region measurement
From: Sean Christopherson
To: Atish Patra
Cc: linux-kernel@vger.kernel.org, Alexandre Ghiti, Andrew Jones, Andrew Morton,
 Anup Patel, Atish Patra, Björn Töpel, Suzuki K Poulose, Will Deacon,
 Marc Zyngier, linux-coco@lists.linux.dev, Dylan Reid, abrestic@rivosinc.com,
 Samuel Ortiz, Christoph Hellwig, Conor Dooley, Greg Kroah-Hartman, Guo Ren,
 Heiko Stuebner, Jiri Slaby, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
 linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale,
 Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Rajnesh Kanwal, Uladzislau Rezki
Content-Type: text/plain; charset="us-ascii"
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Apr 19, 2023, Atish Patra wrote:
> +int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, struct kvm_riscv_cove_measure_region *mr)
> +{
> +	struct kvm_cove_tvm_context *tvmc = kvm->arch.tvmc;
> +	int rc = 0, idx, num_pages;
> +	struct kvm_riscv_cove_mem_region *conf;
> +	struct page *pinned_page, *conf_page;
> +	struct kvm_riscv_cove_page *cpage;
> +
> +	if (!tvmc)
> +		return -EFAULT;
> +
> +	if (tvmc->finalized_done) {
> +		kvm_err("measured_mr pages can not be added after finalize\n");
> +		return -EINVAL;
> +	}
> +
> +	num_pages = bytes_to_pages(mr->size);
> +	conf = &tvmc->confidential_region;
> +
> +	if (!IS_ALIGNED(mr->userspace_addr, PAGE_SIZE) ||
> +	    !IS_ALIGNED(mr->gpa, PAGE_SIZE) || !mr->size ||
> +	    !cove_is_within_region(conf->gpa, conf->npages << PAGE_SHIFT, mr->gpa, mr->size))
> +		return -EINVAL;
> +
> +	idx = srcu_read_lock(&kvm->srcu);
> +
> +	/*TODO: Iterate one page at a time as pinning multiple pages fail with unmapped panic
> +	 * with a virtual address range belonging to vmalloc region for some reason.

I've no idea what code you had, but I suspect the fact that vmalloc'd memory
isn't guaranteed to be physically contiguous is relevant to the panic.

> +	 */
> +	while (num_pages) {
> +		if (signal_pending(current)) {
> +			rc = -ERESTARTSYS;
> +			break;
> +		}
> +
> +		if (need_resched())
> +			cond_resched();
> +
> +		rc = get_user_pages_fast(mr->userspace_addr, 1, 0, &pinned_page);
> +		if (rc < 0) {
> +			kvm_err("Pinning the userpsace addr %lx failed\n", mr->userspace_addr);
> +			break;
> +		}
> +
> +		/* Enough pages are not available to be pinned */
> +		if (rc != 1) {
> +			rc = -ENOMEM;
> +			break;
> +		}
> +		conf_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
> +		if (!conf_page) {
> +			rc = -ENOMEM;
> +			break;
> +		}
> +
> +		rc = cove_convert_pages(page_to_phys(conf_page), 1, true);
> +		if (rc)
> +			break;
> +
> +		/*TODO: Support other pages sizes */
> +		rc = sbi_covh_add_measured_pages(tvmc->tvm_guest_id, page_to_phys(pinned_page),
> +						 page_to_phys(conf_page), SBI_COVE_PAGE_4K,
> +						 1, mr->gpa);
> +		if (rc)
> +			break;
> +
> +		/* Unpin the page now */
> +		put_page(pinned_page);
> +
> +		cpage = kmalloc(sizeof(*cpage), GFP_KERNEL_ACCOUNT);
> +		if (!cpage) {
> +			rc = -ENOMEM;
> +			break;
> +		}
> +
> +		cpage->page = conf_page;
> +		cpage->npages = 1;
> +		cpage->gpa = mr->gpa;
> +		cpage->hva = mr->userspace_addr;

Snapshotting the userspace address for the _source_ page can't possibly be
useful.

> +		cpage->is_mapped = true;
> +		INIT_LIST_HEAD(&cpage->link);
> +		list_add(&cpage->link, &tvmc->measured_pages);
> +
> +		mr->userspace_addr += PAGE_SIZE;
> +		mr->gpa += PAGE_SIZE;
> +		num_pages--;
> +		conf_page = NULL;
> +
> +		continue;
> +	}
> +	srcu_read_unlock(&kvm->srcu, idx);
> +
> +	if (rc < 0) {
> +		/* We don't to need unpin pages as it is allocated by the hypervisor itself */

This comment makes no sense.  The above code is doing all of the allocation
and pinning, which strongly suggests that KVM is the hypervisor.  But this
comment implies that KVM is not the hypervisor.

And "pinned_page" is unpinned in the loop after the page is added+measured,
which looks to be the same model as TDX where "pinned_page" is the source and
"conf_page" is gifted to the hypervisor.  But on failure, e.g.
when allocating "conf_page", that reference is not put.

> +		cove_delete_page_list(kvm, &tvmc->measured_pages, false);
> +		/* Free the last allocated page for which conversion/measurement failed */
> +		kfree(conf_page);

Assuming my guesses about how the architecture works are correct, this is
broken if sbi_covh_add_measured_pages() fails.  The page has already been
gifted to the TSM by cove_convert_pages(), but there is no call to
sbi_covh_tsm_reclaim_pages(), which I'm guessing is necessary to transition
the page back to a state where it can be safely used by the host.
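
Something like the below (completely untested, and the signature of
sbi_covh_tsm_reclaim_pages() is purely my guess: I'm assuming it takes a
physical address plus a page count and is what's needed to take an unmeasured
page back from the TSM) is roughly what I'd expect the error handling to look
like, i.e. unwind each step in the loop body instead of leaning on the common
exit path:

		rc = cove_convert_pages(page_to_phys(conf_page), 1, true);
		if (rc) {
			/* Nothing has been gifted yet, just free and unpin. */
			__free_page(conf_page);
			put_page(pinned_page);
			break;
		}

		/* TODO: Support other page sizes */
		rc = sbi_covh_add_measured_pages(tvmc->tvm_guest_id,
						 page_to_phys(pinned_page),
						 page_to_phys(conf_page),
						 SBI_COVE_PAGE_4K, 1, mr->gpa);
		if (rc) {
			/*
			 * The page was already gifted to the TSM by
			 * cove_convert_pages(); reclaim it before freeing it
			 * (hypothetical signature, see above).
			 */
			if (!sbi_covh_tsm_reclaim_pages(page_to_phys(conf_page), 1))
				__free_page(conf_page);
			put_page(pinned_page);
			break;
		}

With per-iteration unwinding like that, the kfree(conf_page) in the common
error path, which also looks wrong for memory that comes from alloc_page(),
can go away entirely.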