From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 13 Oct 2021 16:58:16 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-2-qperret@google.com>
Mime-Version: 1.0
References: <20211013155831.943476-1-qperret@google.com>
X-Mailer: git-send-email 2.33.0.882.g93a45727a2-goog
Subject: [PATCH 01/16] KVM: arm64: Introduce do_share() helper for memory sharing between components
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

From: Will Deacon

In preparation for extending memory sharing to include the guest as well
as the hypervisor and the host, introduce a high-level do_share() helper
which allows memory to be shared between these components without
duplication of validity checks.
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |   5 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 315 ++++++++++++++++++
 2 files changed, 320 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index b58c910babaf..56445586c755 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -24,6 +24,11 @@ enum pkvm_page_state {
 	PKVM_PAGE_OWNED			= 0ULL,
 	PKVM_PAGE_SHARED_OWNED		= KVM_PGTABLE_PROT_SW0,
 	PKVM_PAGE_SHARED_BORROWED	= KVM_PGTABLE_PROT_SW1,
+	__PKVM_PAGE_RESERVED		= KVM_PGTABLE_PROT_SW0 |
+					  KVM_PGTABLE_PROT_SW1,
+
+	/* Meta-states which aren't encoded directly in the PTE's SW bits */
+	PKVM_NOPAGE,
 };
 
 #define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index bacd493a4eac..53e503501044 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -443,3 +443,318 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 	ret = host_stage2_idmap(addr);
 	BUG_ON(ret && ret != -EAGAIN);
 }
+
+/* This corresponds to locking order */
+enum pkvm_component_id {
+	PKVM_ID_HOST,
+	PKVM_ID_HYP,
+};
+
+struct pkvm_mem_transition {
+	u64				nr_pages;
+
+	struct {
+		enum pkvm_component_id	id;
+		u64			addr;
+
+		union {
+			struct {
+				u64	completer_addr;
+			} host;
+		};
+	} initiator;
+
+	struct {
+		enum pkvm_component_id	id;
+	} completer;
+};
+
+struct pkvm_mem_share {
+	struct pkvm_mem_transition	tx;
+	enum kvm_pgtable_prot		prot;
+};
+
+struct pkvm_page_req {
+	struct {
+		enum pkvm_page_state	state;
+		u64			addr;
+	} initiator;
+
+	struct {
+		u64			addr;
+	} completer;
+
+	phys_addr_t			phys;
+};
+
+struct pkvm_page_share_ack {
+	struct {
+		enum pkvm_page_state	state;
+		phys_addr_t		phys;
+		enum kvm_pgtable_prot	prot;
+	} completer;
+};
+
+static void host_lock_component(void)
+{
+	hyp_spin_lock(&host_kvm.lock);
+}
+
+static void host_unlock_component(void)
+{
+	hyp_spin_unlock(&host_kvm.lock);
+}
+
+static void hyp_lock_component(void)
+{
+	hyp_spin_lock(&pkvm_pgd_lock);
+}
+
+static void hyp_unlock_component(void)
+{
+	hyp_spin_unlock(&pkvm_pgd_lock);
+}
+
+static int host_request_share(struct pkvm_page_req *req,
+			      struct pkvm_mem_transition *tx,
+			      u64 idx)
+{
+	u64 offset = idx * PAGE_SIZE;
+	enum kvm_pgtable_prot prot;
+	u64 host_addr;
+	kvm_pte_t pte;
+	int err;
+
+	hyp_assert_lock_held(&host_kvm.lock);
+
+	host_addr = tx->initiator.addr + offset;
+	err = kvm_pgtable_get_leaf(&host_kvm.pgt, host_addr, &pte, NULL);
+	if (err)
+		return err;
+
+	if (!kvm_pte_valid(pte) && pte)
+		return -EPERM;
+
+	prot = kvm_pgtable_stage2_pte_prot(pte);
+	*req = (struct pkvm_page_req) {
+		.initiator	= {
+			.state	= pkvm_getstate(prot),
+			.addr	= host_addr,
+		},
+		.completer	= {
+			.addr	= tx->initiator.host.completer_addr + offset,
+		},
+		.phys		= host_addr,
+	};
+
+	return 0;
+}
+
+/*
+ * Populate the page-sharing request (@req) based on the share transition
+ * information from the initiator and its current page state.
+ */
+static int request_share(struct pkvm_page_req *req,
+			 struct pkvm_mem_share *share,
+			 u64 idx)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		return host_request_share(req, tx, idx);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int hyp_ack_share(struct pkvm_page_share_ack *ack,
+			 struct pkvm_page_req *req,
+			 enum kvm_pgtable_prot perms)
+{
+	enum pkvm_page_state state = PKVM_NOPAGE;
+	enum kvm_pgtable_prot prot = 0;
+	phys_addr_t phys = 0;
+	kvm_pte_t pte;
+	u64 hyp_addr;
+	int err;
+
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+
+	if (perms != PAGE_HYP)
+		return -EPERM;
+
+	hyp_addr = req->completer.addr;
+	err = kvm_pgtable_get_leaf(&pkvm_pgtable, hyp_addr, &pte, NULL);
+	if (err)
+		return err;
+
+	if (kvm_pte_valid(pte)) {
+		state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(pte));
+		phys = kvm_pte_to_phys(pte);
+		prot = kvm_pgtable_hyp_pte_prot(pte) & KVM_PGTABLE_PROT_RWX;
+	}
+
+	*ack = (struct pkvm_page_share_ack) {
+		.completer	= {
+			.state	= state,
+			.phys	= phys,
+			.prot	= prot,
+		},
+	};
+
+	return 0;
+}
+
+/*
+ * Populate the page-sharing acknowledgment (@ack) based on the sharing request
+ * from the initiator and the current page state in the completer.
+ */
+static int ack_share(struct pkvm_page_share_ack *ack,
+		     struct pkvm_page_req *req,
+		     struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		return hyp_ack_share(ack, req, share->prot);
+	default:
+		return -EINVAL;
+	}
+}
+
+/*
+ * Check that the page states in the initiator and the completer are compatible
+ * for the requested page-sharing operation to go ahead.
+ */
+static int check_share(struct pkvm_page_req *req,
+		       struct pkvm_page_share_ack *ack,
+		       struct pkvm_mem_share *share)
+{
+	if (!addr_is_memory(req->phys))
+		return -EINVAL;
+
+	if (req->initiator.state == PKVM_PAGE_OWNED &&
+	    ack->completer.state == PKVM_NOPAGE) {
+		return 0;
+	}
+
+	if (req->initiator.state != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
+
+	if (ack->completer.state != PKVM_PAGE_SHARED_BORROWED)
+		return -EPERM;
+
+	if (ack->completer.phys != req->phys)
+		return -EPERM;
+
+	if (ack->completer.prot != share->prot)
+		return -EPERM;
+
+	return 0;
+}
+
+static int host_initiate_share(struct pkvm_page_req *req)
+{
+	enum kvm_pgtable_prot prot;
+
+	prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
+	return host_stage2_idmap_locked(req->initiator.addr, PAGE_SIZE, prot);
+}
+
+/* Update the initiator's page-table for the page-sharing request */
+static int initiate_share(struct pkvm_page_req *req,
+			  struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		return host_initiate_share(req);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int hyp_complete_share(struct pkvm_page_req *req,
+			      enum kvm_pgtable_prot perms)
+{
+	void *start = (void *)req->completer.addr, *end = start + PAGE_SIZE;
+	enum kvm_pgtable_prot prot;
+
+	prot = pkvm_mkstate(perms, PKVM_PAGE_SHARED_BORROWED);
+	return pkvm_create_mappings_locked(start, end, prot);
+}
+
+/* Update the completer's page-table for the page-sharing request */
+static int complete_share(struct pkvm_page_req *req,
+			  struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		return hyp_complete_share(req, share->prot);
+	default:
+		return -EINVAL;
+	}
+}
+
+/*
+ * do_share():
+ *
+ * The page owner grants access to another component with a given set
+ * of permissions.
+ *
+ * Initiator: OWNED	=> SHARED_OWNED
+ * Completer: NOPAGE	=> SHARED_BORROWED
+ *
+ * Note that we permit the same share operation to be repeated from the
+ * host to the hypervisor, as this removes the need for excessive
+ * book-keeping of shared KVM data structures at EL1.
+ */
+static int do_share(struct pkvm_mem_share *share)
+{
+	struct pkvm_page_req req;
+	int ret = 0;
+	u64 idx;
+
+	for (idx = 0; idx < share->tx.nr_pages; ++idx) {
+		struct pkvm_page_share_ack ack;
+
+		ret = request_share(&req, share, idx);
+		if (ret)
+			goto out;
+
+		ret = ack_share(&ack, &req, share);
+		if (ret)
+			goto out;
+
+		ret = check_share(&req, &ack, share);
+		if (ret)
+			goto out;
+	}
+
+	for (idx = 0; idx < share->tx.nr_pages; ++idx) {
+		ret = request_share(&req, share, idx);
+		if (ret)
+			break;
+
+		/* Allow double-sharing by skipping over the page */
+		if (req.initiator.state == PKVM_PAGE_SHARED_OWNED)
+			continue;
+
+		ret = initiate_share(&req, share);
+		if (ret)
+			break;
+
+		ret = complete_share(&req, share);
+		if (ret)
+			break;
+	}
+
+	WARN_ON(ret);
+out:
+	return ret;
+}
-- 
2.33.0.882.g93a45727a2-goog
X-Virus-Scanned: at lists.cs.columbia.edu Authentication-Results: mm01.cs.columbia.edu (amavisd-new); dkim=softfail (fail, message has been altered) header.i=@google.com Received: from mm01.cs.columbia.edu ([127.0.0.1]) by localhost (mm01.cs.columbia.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id jQLC4Q+JwmsL; Wed, 13 Oct 2021 11:58:39 -0400 (EDT) Received: from mm01.cs.columbia.edu (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id C347B4B10D; Wed, 13 Oct 2021 11:58:39 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id AD09F4B092 for ; Wed, 13 Oct 2021 11:58:38 -0400 (EDT) X-Virus-Scanned: at lists.cs.columbia.edu Received: from mm01.cs.columbia.edu ([127.0.0.1]) by localhost (mm01.cs.columbia.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qnNzl+TJkwM2 for ; Wed, 13 Oct 2021 11:58:37 -0400 (EDT) Received: from mail-wr1-f73.google.com (mail-wr1-f73.google.com [209.85.221.73]) by mm01.cs.columbia.edu (Postfix) with ESMTPS id 64BF14B0EC for ; Wed, 13 Oct 2021 11:58:37 -0400 (EDT) Received: by mail-wr1-f73.google.com with SMTP id j19-20020adfb313000000b00160a9de13b3so2359220wrd.8 for ; Wed, 13 Oct 2021 08:58:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=v2gd8qWvh4eZ/J6qFrhADUHaOZfVJrXdFAKXh8OoabM=; b=aiLbfTOZ81dk7Ubs6Jh81CTkEkNUWHmmr8x/KGdngNpEYBP0Z9GSspCYasDJsokS8B 25KZrOjXeP6snrT3y16Ol8qeQ19bx/ldAPBQvNXukYoiUW0Ufy5hmjawfyXajTExU78p +Jtl2soJvLAYtF6s8fAP35c5qkaYm9mbyzZ42HC5BJyLrpcL4DWXdn2mWAiednc4H+nv o9Tnzs9v8p6vKewI9cHEr48DTa5cRelubAkDJ0eH3elTYaKbv8y5b/Mx2J+oogxsNLZZ ykwRSxEIBpLGJUfWcrt8p0vSO/pZ7aS2DZHRC+PFmJW6bfsYxHkoLpX00sU0YFYLQaAP NLxQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; 
bh=v2gd8qWvh4eZ/J6qFrhADUHaOZfVJrXdFAKXh8OoabM=; b=HQH/fVRwSHacr6uy6+JKrRnDQYLBwL3889ZtTWYcV8jGmV10ANb6xdfXothQ1ESnUi yBuDpA3fwd6W8HOpiF0yOtVbCDZLVQJM5RW0gVG1SCdw2Ti0Z/9Ci55rrvdWCn5bJm6R L3Kkx+BNOQJgjyVNHCdPS7oE58CWYSVu/pHZri36GZrQl+pwf+lD6mTYDcv1KRI0Sz9l XovGVioM5v9tPQjal4lxF3eUrQHuTzKKRinVKjKvJa3Ve//eEq+d4A8UWcVeh+ZQE3MO FIE4MuTDZC9QfkYJ60LZ4UEPa9M4Pcpzn6heBwUnph2a44pzpIEasDZH7yF3+ERL8LLH votg== X-Gm-Message-State: AOAM533FFxzXkmHtgUrFMdoZCT4VxRQMrizPL+OPdMDSljzAO4JHtVv8 TpVF2tss+1on4l+Eb45EpLfVCCE6BVc6 X-Google-Smtp-Source: ABdhPJzsn1SgKf860FkEzGm1RZCUUM/Y+aQz9WqD6w1BPfROEP4vWn6pc5yiCYtsPSp0+MddQ+sTTMF4s46R X-Received: from luke.lon.corp.google.com ([2a00:79e0:d:210:65b5:73d3:1558:b9ae]) (user=qperret job=sendgmr) by 2002:a7b:c5cc:: with SMTP id n12mr69756wmk.43.1634140716553; Wed, 13 Oct 2021 08:58:36 -0700 (PDT) Date: Wed, 13 Oct 2021 16:58:16 +0100 In-Reply-To: <20211013155831.943476-1-qperret@google.com> Message-Id: <20211013155831.943476-2-qperret@google.com> Mime-Version: 1.0 References: <20211013155831.943476-1-qperret@google.com> X-Mailer: git-send-email 2.33.0.882.g93a45727a2-goog Subject: [PATCH 01/16] KVM: arm64: Introduce do_share() helper for memory sharing between components From: Quentin Perret To: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Catalin Marinas , Will Deacon , Fuad Tabba , David Brazdil Cc: kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org X-BeenThere: kvmarm@lists.cs.columbia.edu X-Mailman-Version: 2.1.14 Precedence: list List-Id: Where KVM/ARM decisions are made List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Errors-To: kvmarm-bounces@lists.cs.columbia.edu Sender: kvmarm-bounces@lists.cs.columbia.edu From: Will Deacon In preparation for extending memory sharing to include the guest as well as the hypervisor and the host, introduce a 
high-level do_share() helper which allows memory to be shared between these components without duplication of validity checks. Signed-off-by: Will Deacon Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 5 + arch/arm64/kvm/hyp/nvhe/mem_protect.c | 315 ++++++++++++++++++ 2 files changed, 320 insertions(+) diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index b58c910babaf..56445586c755 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -24,6 +24,11 @@ enum pkvm_page_state { PKVM_PAGE_OWNED = 0ULL, PKVM_PAGE_SHARED_OWNED = KVM_PGTABLE_PROT_SW0, PKVM_PAGE_SHARED_BORROWED = KVM_PGTABLE_PROT_SW1, + __PKVM_PAGE_RESERVED = KVM_PGTABLE_PROT_SW0 | + KVM_PGTABLE_PROT_SW1, + + /* Meta-states which aren't encoded directly in the PTE's SW bits */ + PKVM_NOPAGE, }; #define PKVM_PAGE_STATE_PROT_MASK (KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1) diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index bacd493a4eac..53e503501044 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -443,3 +443,318 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt) ret = host_stage2_idmap(addr); BUG_ON(ret && ret != -EAGAIN); } + +/* This corresponds to locking order */ +enum pkvm_component_id { + PKVM_ID_HOST, + PKVM_ID_HYP, +}; + +struct pkvm_mem_transition { + u64 nr_pages; + + struct { + enum pkvm_component_id id; + u64 addr; + + union { + struct { + u64 completer_addr; + } host; + }; + } initiator; + + struct { + enum pkvm_component_id id; + } completer; +}; + +struct pkvm_mem_share { + struct pkvm_mem_transition tx; + enum kvm_pgtable_prot prot; +}; + +struct pkvm_page_req { + struct { + enum pkvm_page_state state; + u64 addr; + } initiator; + + struct { + u64 addr; + } completer; + + phys_addr_t phys; +}; + +struct pkvm_page_share_ack { + struct 
{ + enum pkvm_page_state state; + phys_addr_t phys; + enum kvm_pgtable_prot prot; + } completer; +}; + +static void host_lock_component(void) +{ + hyp_spin_lock(&host_kvm.lock); +} + +static void host_unlock_component(void) +{ + hyp_spin_unlock(&host_kvm.lock); +} + +static void hyp_lock_component(void) +{ + hyp_spin_lock(&pkvm_pgd_lock); +} + +static void hyp_unlock_component(void) +{ + hyp_spin_unlock(&pkvm_pgd_lock); +} + +static int host_request_share(struct pkvm_page_req *req, + struct pkvm_mem_transition *tx, + u64 idx) +{ + u64 offset = idx * PAGE_SIZE; + enum kvm_pgtable_prot prot; + u64 host_addr; + kvm_pte_t pte; + int err; + + hyp_assert_lock_held(&host_kvm.lock); + + host_addr = tx->initiator.addr + offset; + err = kvm_pgtable_get_leaf(&host_kvm.pgt, host_addr, &pte, NULL); + if (err) + return err; + + if (!kvm_pte_valid(pte) && pte) + return -EPERM; + + prot = kvm_pgtable_stage2_pte_prot(pte); + *req = (struct pkvm_page_req) { + .initiator = { + .state = pkvm_getstate(prot), + .addr = host_addr, + }, + .completer = { + .addr = tx->initiator.host.completer_addr + offset, + }, + .phys = host_addr, + }; + + return 0; +} + +/* + * Populate the page-sharing request (@req) based on the share transition + * information from the initiator and its current page state. 
+ */ +static int request_share(struct pkvm_page_req *req, + struct pkvm_mem_share *share, + u64 idx) +{ + struct pkvm_mem_transition *tx = &share->tx; + + switch (tx->initiator.id) { + case PKVM_ID_HOST: + return host_request_share(req, tx, idx); + default: + return -EINVAL; + } +} + +static int hyp_ack_share(struct pkvm_page_share_ack *ack, + struct pkvm_page_req *req, + enum kvm_pgtable_prot perms) +{ + enum pkvm_page_state state = PKVM_NOPAGE; + enum kvm_pgtable_prot prot = 0; + phys_addr_t phys = 0; + kvm_pte_t pte; + u64 hyp_addr; + int err; + + hyp_assert_lock_held(&pkvm_pgd_lock); + + if (perms != PAGE_HYP) + return -EPERM; + + hyp_addr = req->completer.addr; + err = kvm_pgtable_get_leaf(&pkvm_pgtable, hyp_addr, &pte, NULL); + if (err) + return err; + + if (kvm_pte_valid(pte)) { + state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(pte)); + phys = kvm_pte_to_phys(pte); + prot = kvm_pgtable_hyp_pte_prot(pte) & KVM_PGTABLE_PROT_RWX; + } + + *ack = (struct pkvm_page_share_ack) { + .completer = { + .state = state, + .phys = phys, + .prot = prot, + }, + }; + + return 0; +} + +/* + * Populate the page-sharing acknowledgment (@ack) based on the sharing request + * from the initiator and the current page state in the completer. + */ +static int ack_share(struct pkvm_page_share_ack *ack, + struct pkvm_page_req *req, + struct pkvm_mem_share *share) +{ + struct pkvm_mem_transition *tx = &share->tx; + + switch (tx->completer.id) { + case PKVM_ID_HYP: + return hyp_ack_share(ack, req, share->prot); + default: + return -EINVAL; + } +} + +/* + * Check that the page states in the initiator and the completer are compatible + * for the requested page-sharing operation to go ahead. 
+ */ +static int check_share(struct pkvm_page_req *req, + struct pkvm_page_share_ack *ack, + struct pkvm_mem_share *share) +{ + if (!addr_is_memory(req->phys)) + return -EINVAL; + + if (req->initiator.state == PKVM_PAGE_OWNED && + ack->completer.state == PKVM_NOPAGE) { + return 0; + } + + if (req->initiator.state != PKVM_PAGE_SHARED_OWNED) + return -EPERM; + + if (ack->completer.state != PKVM_PAGE_SHARED_BORROWED) + return -EPERM; + + if (ack->completer.phys != req->phys) + return -EPERM; + + if (ack->completer.prot != share->prot) + return -EPERM; + + return 0; +} + +static int host_initiate_share(struct pkvm_page_req *req) +{ + enum kvm_pgtable_prot prot; + + prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED); + return host_stage2_idmap_locked(req->initiator.addr, PAGE_SIZE, prot); +} + +/* Update the initiator's page-table for the page-sharing request */ +static int initiate_share(struct pkvm_page_req *req, + struct pkvm_mem_share *share) +{ + struct pkvm_mem_transition *tx = &share->tx; + + switch (tx->initiator.id) { + case PKVM_ID_HOST: + return host_initiate_share(req); + default: + return -EINVAL; + } +} + +static int hyp_complete_share(struct pkvm_page_req *req, + enum kvm_pgtable_prot perms) +{ + void *start = (void *)req->completer.addr, *end = start + PAGE_SIZE; + enum kvm_pgtable_prot prot; + + prot = pkvm_mkstate(perms, PKVM_PAGE_SHARED_BORROWED); + return pkvm_create_mappings_locked(start, end, prot); +} + +/* Update the completer's page-table for the page-sharing request */ +static int complete_share(struct pkvm_page_req *req, + struct pkvm_mem_share *share) +{ + struct pkvm_mem_transition *tx = &share->tx; + + switch (tx->completer.id) { + case PKVM_ID_HYP: + return hyp_complete_share(req, share->prot); + default: + return -EINVAL; + } +} + +/* + * do_share(): + * + * The page owner grants access to another component with a given set + * of permissions. 
+ * + * Initiator: OWNED => SHARED_OWNED + * Completer: NOPAGE => SHARED_BORROWED + * + * Note that we permit the same share operation to be repeated from the + * host to the hypervisor, as this removes the need for excessive + * book-keeping of shared KVM data structures at EL1. + */ +static int do_share(struct pkvm_mem_share *share) +{ + struct pkvm_page_req req; + int ret = 0; + u64 idx; + + for (idx = 0; idx < share->tx.nr_pages; ++idx) { + struct pkvm_page_share_ack ack; + + ret = request_share(&req, share, idx); + if (ret) + goto out; + + ret = ack_share(&ack, &req, share); + if (ret) + goto out; + + ret = check_share(&req, &ack, share); + if (ret) + goto out; + } + + for (idx = 0; idx < share->tx.nr_pages; ++idx) { + ret = request_share(&req, share, idx); + if (ret) + break; + + /* Allow double-sharing by skipping over the page */ + if (req.initiator.state == PKVM_PAGE_SHARED_OWNED) + continue; + + ret = initiate_share(&req, share); + if (ret) + break; + + ret = complete_share(&req, share); + if (ret) + break; + } + + WARN_ON(ret); +out: + return ret; +} -- 2.33.0.882.g93a45727a2-goog _______________________________________________ kvmarm mailing list kvmarm@lists.cs.columbia.edu https://lists.cs.columbia.edu/mailman/listinfo/kvmarm From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C47F4C433EF for ; Wed, 13 Oct 2021 16:01:38 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9108E61168 for ; Wed, 13 Oct 2021 16:01:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 9108E61168 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject 
dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=n7lPnXkQAwZhas0niRE4Ie7pDP1snxqET0eKhoVJ6mE=; b=jeXnljJDqjPIj7ArbOz27Rjqo/ 06G49VZjMg2UDHdCNicHdvMID6Yu1YbzSqZq6MnJw4231Uaox1FgHbl4TpPmD/R3UDtyoQnSI/hUR duFE9crmx4uVVxo3XrQIKuKEnU7aK18LsVHjhQms9mvK5LoYqklokuzsSToTvHBCFOuycsxOaArhd ngyo0R7hzdl7eb6lNLNyGCsmWijEl6uJxLpwSpKJe98Q5cllBi/6KMvwa5n41l8t4r4NFIe03PiGS V+MHKQCLcb5UTecu3LWHFyX4T8x0EoMpOt9h6ZpeIO4JnNordleC0w1Vs5dQBD3BMsfBDR4OxUMEd ksvP4SWA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1mageg-00HTON-Us; Wed, 13 Oct 2021 15:58:59 +0000 Received: from mail-wr1-x44a.google.com ([2a00:1450:4864:20::44a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1mageM-00HTHu-Eg for linux-arm-kernel@lists.infradead.org; Wed, 13 Oct 2021 15:58:40 +0000 Received: by mail-wr1-x44a.google.com with SMTP id r16-20020adfbb10000000b00160958ed8acso2328180wrg.16 for ; Wed, 13 Oct 2021 08:58:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=v2gd8qWvh4eZ/J6qFrhADUHaOZfVJrXdFAKXh8OoabM=; b=aiLbfTOZ81dk7Ubs6Jh81CTkEkNUWHmmr8x/KGdngNpEYBP0Z9GSspCYasDJsokS8B 25KZrOjXeP6snrT3y16Ol8qeQ19bx/ldAPBQvNXukYoiUW0Ufy5hmjawfyXajTExU78p +Jtl2soJvLAYtF6s8fAP35c5qkaYm9mbyzZ42HC5BJyLrpcL4DWXdn2mWAiednc4H+nv o9Tnzs9v8p6vKewI9cHEr48DTa5cRelubAkDJ0eH3elTYaKbv8y5b/Mx2J+oogxsNLZZ 
ykwRSxEIBpLGJUfWcrt8p0vSO/pZ7aS2DZHRC+PFmJW6bfsYxHkoLpX00sU0YFYLQaAP NLxQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=v2gd8qWvh4eZ/J6qFrhADUHaOZfVJrXdFAKXh8OoabM=; b=5zvrSz/LxMbtkJFuvUdGdJlk+FwIMm1k91YZBR/h3ccMxqBiNWLEDLbMkV0AeKGEyg 0fuV/eQpoqHrpxPPor0vB2y/pqn913PSKX14VROpykjFih8gyLhtx7wJH1p23zE56amY A3wTesGSpSd9IT14lLwY8u1c0PVl4MImsSnXCfi5RpiDuDmfryjaeZlS7stXMrvagYmS Sia+FjCi5oEbIxpIopM9LtENvm0NDhYNg9UFU/L2Ank7Gx1rhFN97a27asgWN+sLPjjn Z0q9lW9XRcYab5FpXT9tnAsSsaYB5lPwIk+OwKvmIaZx7eAMenXmrQMHyw02nU2BlUXJ bOgg== X-Gm-Message-State: AOAM533hxUxnd//yd2uyQ4813Zx7sV4vebq74sBiRmTm1LE49EmBBmaP 7EmMNqfDzFYdmqYK0+KTGEEn1QSHyHzU X-Google-Smtp-Source: ABdhPJzsn1SgKf860FkEzGm1RZCUUM/Y+aQz9WqD6w1BPfROEP4vWn6pc5yiCYtsPSp0+MddQ+sTTMF4s46R X-Received: from luke.lon.corp.google.com ([2a00:79e0:d:210:65b5:73d3:1558:b9ae]) (user=qperret job=sendgmr) by 2002:a7b:c5cc:: with SMTP id n12mr69756wmk.43.1634140716553; Wed, 13 Oct 2021 08:58:36 -0700 (PDT) Date: Wed, 13 Oct 2021 16:58:16 +0100 In-Reply-To: <20211013155831.943476-1-qperret@google.com> Message-Id: <20211013155831.943476-2-qperret@google.com> Mime-Version: 1.0 References: <20211013155831.943476-1-qperret@google.com> X-Mailer: git-send-email 2.33.0.882.g93a45727a2-goog Subject: [PATCH 01/16] KVM: arm64: Introduce do_share() helper for memory sharing between components From: Quentin Perret To: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Catalin Marinas , Will Deacon , Fuad Tabba , David Brazdil Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211013_085838_552719_C65E187A X-CRM114-Status: GOOD ( 20.04 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list 
List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Will Deacon In preparation for extending memory sharing to include the guest as well as the hypervisor and the host, introduce a high-level do_share() helper which allows memory to be shared between these components without duplication of validity checks. Signed-off-by: Will Deacon Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 5 + arch/arm64/kvm/hyp/nvhe/mem_protect.c | 315 ++++++++++++++++++ 2 files changed, 320 insertions(+) diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index b58c910babaf..56445586c755 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -24,6 +24,11 @@ enum pkvm_page_state { PKVM_PAGE_OWNED = 0ULL, PKVM_PAGE_SHARED_OWNED = KVM_PGTABLE_PROT_SW0, PKVM_PAGE_SHARED_BORROWED = KVM_PGTABLE_PROT_SW1, + __PKVM_PAGE_RESERVED = KVM_PGTABLE_PROT_SW0 | + KVM_PGTABLE_PROT_SW1, + + /* Meta-states which aren't encoded directly in the PTE's SW bits */ + PKVM_NOPAGE, }; #define PKVM_PAGE_STATE_PROT_MASK (KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1) diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index bacd493a4eac..53e503501044 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -443,3 +443,318 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt) ret = host_stage2_idmap(addr); BUG_ON(ret && ret != -EAGAIN); } + +/* This corresponds to locking order */ +enum pkvm_component_id { + PKVM_ID_HOST, + PKVM_ID_HYP, +}; + +struct pkvm_mem_transition { + u64 nr_pages; + + struct { + enum pkvm_component_id id; + u64 addr; + + union { + 
struct { + u64 completer_addr; + } host; + }; + } initiator; + + struct { + enum pkvm_component_id id; + } completer; +}; + +struct pkvm_mem_share { + struct pkvm_mem_transition tx; + enum kvm_pgtable_prot prot; +}; + +struct pkvm_page_req { + struct { + enum pkvm_page_state state; + u64 addr; + } initiator; + + struct { + u64 addr; + } completer; + + phys_addr_t phys; +}; + +struct pkvm_page_share_ack { + struct { + enum pkvm_page_state state; + phys_addr_t phys; + enum kvm_pgtable_prot prot; + } completer; +}; + +static void host_lock_component(void) +{ + hyp_spin_lock(&host_kvm.lock); +} + +static void host_unlock_component(void) +{ + hyp_spin_unlock(&host_kvm.lock); +} + +static void hyp_lock_component(void) +{ + hyp_spin_lock(&pkvm_pgd_lock); +} + +static void hyp_unlock_component(void) +{ + hyp_spin_unlock(&pkvm_pgd_lock); +} + +static int host_request_share(struct pkvm_page_req *req, + struct pkvm_mem_transition *tx, + u64 idx) +{ + u64 offset = idx * PAGE_SIZE; + enum kvm_pgtable_prot prot; + u64 host_addr; + kvm_pte_t pte; + int err; + + hyp_assert_lock_held(&host_kvm.lock); + + host_addr = tx->initiator.addr + offset; + err = kvm_pgtable_get_leaf(&host_kvm.pgt, host_addr, &pte, NULL); + if (err) + return err; + + if (!kvm_pte_valid(pte) && pte) + return -EPERM; + + prot = kvm_pgtable_stage2_pte_prot(pte); + *req = (struct pkvm_page_req) { + .initiator = { + .state = pkvm_getstate(prot), + .addr = host_addr, + }, + .completer = { + .addr = tx->initiator.host.completer_addr + offset, + }, + .phys = host_addr, + }; + + return 0; +} + +/* + * Populate the page-sharing request (@req) based on the share transition + * information from the initiator and its current page state. 
+ */
+static int request_share(struct pkvm_page_req *req,
+			 struct pkvm_mem_share *share,
+			 u64 idx)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		return host_request_share(req, tx, idx);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int hyp_ack_share(struct pkvm_page_share_ack *ack,
+			 struct pkvm_page_req *req,
+			 enum kvm_pgtable_prot perms)
+{
+	enum pkvm_page_state state = PKVM_NOPAGE;
+	enum kvm_pgtable_prot prot = 0;
+	phys_addr_t phys = 0;
+	kvm_pte_t pte;
+	u64 hyp_addr;
+	int err;
+
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+
+	if (perms != PAGE_HYP)
+		return -EPERM;
+
+	hyp_addr = req->completer.addr;
+	err = kvm_pgtable_get_leaf(&pkvm_pgtable, hyp_addr, &pte, NULL);
+	if (err)
+		return err;
+
+	if (kvm_pte_valid(pte)) {
+		state	= pkvm_getstate(kvm_pgtable_hyp_pte_prot(pte));
+		phys	= kvm_pte_to_phys(pte);
+		prot	= kvm_pgtable_hyp_pte_prot(pte) & KVM_PGTABLE_PROT_RWX;
+	}
+
+	*ack = (struct pkvm_page_share_ack) {
+		.completer	= {
+			.state	= state,
+			.phys	= phys,
+			.prot	= prot,
+		},
+	};
+
+	return 0;
+}
+
+/*
+ * Populate the page-sharing acknowledgment (@ack) based on the sharing request
+ * from the initiator and the current page state in the completer.
+ */
+static int ack_share(struct pkvm_page_share_ack *ack,
+		     struct pkvm_page_req *req,
+		     struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		return hyp_ack_share(ack, req, share->prot);
+	default:
+		return -EINVAL;
+	}
+}
+
+/*
+ * Check that the page states in the initiator and the completer are compatible
+ * for the requested page-sharing operation to go ahead.
+ */
+static int check_share(struct pkvm_page_req *req,
+		       struct pkvm_page_share_ack *ack,
+		       struct pkvm_mem_share *share)
+{
+	if (!addr_is_memory(req->phys))
+		return -EINVAL;
+
+	if (req->initiator.state == PKVM_PAGE_OWNED &&
+	    ack->completer.state == PKVM_NOPAGE) {
+		return 0;
+	}
+
+	if (req->initiator.state != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
+
+	if (ack->completer.state != PKVM_PAGE_SHARED_BORROWED)
+		return -EPERM;
+
+	if (ack->completer.phys != req->phys)
+		return -EPERM;
+
+	if (ack->completer.prot != share->prot)
+		return -EPERM;
+
+	return 0;
+}
+
+static int host_initiate_share(struct pkvm_page_req *req)
+{
+	enum kvm_pgtable_prot prot;
+
+	prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
+	return host_stage2_idmap_locked(req->initiator.addr, PAGE_SIZE, prot);
+}
+
+/* Update the initiator's page-table for the page-sharing request */
+static int initiate_share(struct pkvm_page_req *req,
+			  struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		return host_initiate_share(req);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int hyp_complete_share(struct pkvm_page_req *req,
+			      enum kvm_pgtable_prot perms)
+{
+	void *start = (void *)req->completer.addr, *end = start + PAGE_SIZE;
+	enum kvm_pgtable_prot prot;
+
+	prot = pkvm_mkstate(perms, PKVM_PAGE_SHARED_BORROWED);
+	return pkvm_create_mappings_locked(start, end, prot);
+}
+
+/* Update the completer's page-table for the page-sharing request */
+static int complete_share(struct pkvm_page_req *req,
+			  struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		return hyp_complete_share(req, share->prot);
+	default:
+		return -EINVAL;
+	}
+}
+
+/*
+ * do_share():
+ *
+ * The page owner grants access to another component with a given set
+ * of permissions.
+ *
+ * Initiator: OWNED	=> SHARED_OWNED
+ * Completer: NOPAGE	=> SHARED_BORROWED
+ *
+ * Note that we permit the same share operation to be repeated from the
+ * host to the hypervisor, as this removes the need for excessive
+ * book-keeping of shared KVM data structures at EL1.
+ */
+static int do_share(struct pkvm_mem_share *share)
+{
+	struct pkvm_page_req req;
+	int ret = 0;
+	u64 idx;
+
+	for (idx = 0; idx < share->tx.nr_pages; ++idx) {
+		struct pkvm_page_share_ack ack;
+
+		ret = request_share(&req, share, idx);
+		if (ret)
+			goto out;
+
+		ret = ack_share(&ack, &req, share);
+		if (ret)
+			goto out;
+
+		ret = check_share(&req, &ack, share);
+		if (ret)
+			goto out;
+	}
+
+	for (idx = 0; idx < share->tx.nr_pages; ++idx) {
+		ret = request_share(&req, share, idx);
+		if (ret)
+			break;
+
+		/* Allow double-sharing by skipping over the page */
+		if (req.initiator.state == PKVM_PAGE_SHARED_OWNED)
+			continue;
+
+		ret = initiate_share(&req, share);
+		if (ret)
+			break;
+
+		ret = complete_share(&req, share);
+		if (ret)
+			break;
+	}
+
+	WARN_ON(ret);
+out:
+	return ret;
+}
-- 
2.33.0.882.g93a45727a2-goog
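The patch encodes the architectural page states in the two PTE software bits and masks them in and out with pkvm_mkstate()/pkvm_getstate(). The encoding can be modelled in user space as below; the bit positions and names here are illustrative only (in the kernel they come from KVM_PGTABLE_PROT_SW0/SW1), not the actual kernel definitions.

```c
#include <stdint.h>

/* Illustrative stand-ins for the KVM_PGTABLE_PROT_SW* software bits. */
#define PROT_SW0		(1ULL << 55)
#define PROT_SW1		(1ULL << 56)
#define PAGE_STATE_PROT_MASK	(PROT_SW0 | PROT_SW1)

enum page_state {
	PAGE_OWNED		= 0,
	PAGE_SHARED_OWNED	= PROT_SW0,
	PAGE_SHARED_BORROWED	= PROT_SW1,
	/* both bits set is __PKVM_PAGE_RESERVED; PKVM_NOPAGE is a meta-state */
};

/* Fold a state into a prot value, in the spirit of pkvm_mkstate(). */
static uint64_t mkstate(uint64_t prot, enum page_state state)
{
	return (prot & ~PAGE_STATE_PROT_MASK) | state;
}

/* Extract the state back out, in the spirit of pkvm_getstate(). */
static enum page_state getstate(uint64_t prot)
{
	return (enum page_state)(prot & PAGE_STATE_PROT_MASK);
}
```

Because the state lives entirely inside the two software bits, updating it never disturbs the permission bits of the prot value, which is what lets host_initiate_share() rewrite the state while keeping PKVM_HOST_MEM_PROT intact.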
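do_share() is deliberately two-phase: the first loop runs request/ack/check for every page with no side effects, and only if the whole transition is legal does the second loop update the initiator's and completer's page-tables. A user-space sketch of that shape, with plain arrays standing in for page-tables (all toy_* names are illustrative, not kernel API):

```c
#include <stddef.h>

enum toy_state { NOPAGE, OWNED, SHARED_OWNED, SHARED_BORROWED };

/* Analogue of check_share(): OWNED->NOPAGE is a fresh share, and an
 * exact repeat of an earlier share is tolerated. */
static int toy_check(enum toy_state init, enum toy_state comp)
{
	if (init == OWNED && comp == NOPAGE)
		return 0;
	if (init == SHARED_OWNED && comp == SHARED_BORROWED)
		return 0;
	return -1;
}

static int toy_share(enum toy_state *initiator, enum toy_state *completer,
		     size_t nr_pages)
{
	size_t i;

	/* Phase 1: validate every page before touching any state. */
	for (i = 0; i < nr_pages; i++) {
		if (toy_check(initiator[i], completer[i]))
			return -1;
	}

	/* Phase 2: commit, skipping already-shared pages as do_share() does. */
	for (i = 0; i < nr_pages; i++) {
		if (initiator[i] == SHARED_OWNED)
			continue;
		initiator[i] = SHARED_OWNED;
		completer[i] = SHARED_BORROWED;
	}
	return 0;
}
```

The point of the split is that a rejected share leaves both components exactly as they were: phase 1 can fail on the last page of a large range without phase 2 having modified anything.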
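The comment "This corresponds to locking order" on enum pkvm_component_id means the component locks are always taken in increasing enum order: the host lock (PKVM_ID_HOST == 0) before the hypervisor lock (PKVM_ID_HYP == 1), which rules out lock-order inversion between host_lock_component() and hyp_lock_component(). A minimal single-threaded model of that rank discipline (names are illustrative, not kernel API):

```c
/* Rank of the most recently acquired lock; -1 when none are held. */
static int last_rank = -1;

enum component_id { ID_HOST, ID_HYP };	/* enum order == lock order */

/* Returns 0 if taking this lock respects the order, -1 on a violation. */
static int lock_component(enum component_id id)
{
	if ((int)id <= last_rank)
		return -1;
	last_rank = (int)id;
	return 0;
}

static void unlock_all(void)
{
	last_rank = -1;
}
```

Taking only one of the two locks is always fine; what the discipline forbids is acquiring the host lock while the hypervisor lock is already held.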