From: Vishal Annapurve
Date: Fri, 10 Jun 2022 14:01:41 -0700
Subject: Re: [RFC V1 PATCH 0/3] selftests: KVM: sev: selftests for fd-based approach of supporting private memory
To: Michael Roth
Cc: x86, kvm list, LKML, linux-kselftest@vger.kernel.org, Paolo Bonzini,
    Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, dave.hansen@linux.intel.com, H. Peter Anvin,
    shuah, yang.zhong@intel.com, drjones@redhat.com, Ricardo Koller,
    Aaron Lewis, wei.w.wang@intel.com, Kirill A. Shutemov, Jonathan Corbet,
    Hugh Dickins, Jeff Layton, J. Bruce Fields, Andrew Morton, Chao Peng,
    Yu Zhang, Jun Nakajima, Dave Hansen, Quentin Perret, Steven Price,
    Andi Kleen, David Hildenbrand, Andy Lutomirski, Vlastimil Babka, Marc Orr,
    Erdem Aktas, Peter Gonda, Nikunj A. Dadhania, Sean Christopherson,
    Austin Diviness, maz@kernel.org, dmatlack@google.com,
    axelrasmussen@google.com, maciej.szmigiero@oracle.com, Mingwei Zhang,
    bgardon@google.com
In-Reply-To: <20220610010510.vlxax4g3sgvsmoly@amd.com>
References: <20220524205646.1798325-1-vannapurve@google.com> <20220610010510.vlxax4g3sgvsmoly@amd.com>

....
>
> I ended up adding a KVM_CAP_UNMAPPED_PRIVATE_MEM to distinguish between the
> 2 modes. With UPM-mode enabled it basically means KVM can/should enforce that
> all private guest pages are backed by private memslots, and enable a couple
> platform-specific hooks to handle MAP_GPA_RANGE, and queries from MMU on
> whether or not an NPT fault is for a private page or not. SEV uses these hooks
> to manage its encryption bitmap, and uses that bitmap as the authority on
> whether or not a page is encrypted. SNP uses GHCB page-state-change requests
> so MAP_GPA_RANGE is a no-op there, but uses the MMU hook to indicate whether a
> fault is private based on the page fault flags.
>
> When UPM-mode isn't enabled, MAP_GPA_RANGE just gets passed on to userspace
> as before, and platform-specific hooks above are no-ops. That's the mode
> your SEV self-tests ran in initially. I added a test that runs the
> PrivateMemoryPrivateAccess in UPM-mode, where the guest's OS memory is also
> backed by private memslot and the platform hooks are enabled, and things seem
> to still work okay there. I only added a UPM-mode test for the
> PrivateMemoryPrivateAccess one though so far. I suppose we'd want to make
> sure it works exactly as it did with UPM-mode disabled, but I don't see why
> it wouldn't.

Thanks Michael for the update. Yeah, using the bitmap to track the
private/shared state of gfn ranges should be a better way to go than the
limited approach I used, which just tracked a single contiguous pfn range.
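(To make the comparison concrete, a made-up illustration rather than actual
KVM code: a single {start, len} pair can't describe shared and private pages
that are interleaved, while a per-gfn bitmap can.)

/* Illustration only, names are made up: two ways of tracking which gfns
 * are private. */
#include <stdbool.h>
#include <stdint.h>

/* What my selftest hack effectively did: one contiguous private range. */
struct priv_range {
	uint64_t start_gfn;
	uint64_t nr_pages;
};

/* What the encryption bitmap gives you: one bit per gfn, so shared and
 * private pages can be interleaved arbitrarily. */
#define NR_GFNS		(1UL << 20)	/* e.g. 4G of guest memory in 4K pages */
#define BITS_PER_LONG	(8 * sizeof(unsigned long))

static unsigned long priv_bitmap[NR_GFNS / BITS_PER_LONG];

static bool gfn_is_private(uint64_t gfn)
{
	return priv_bitmap[gfn / BITS_PER_LONG] & (1UL << (gfn % BITS_PER_LONG));
}

static void mark_gfn(uint64_t gfn, bool private)
{
	if (private)
		priv_bitmap[gfn / BITS_PER_LONG] |= 1UL << (gfn % BITS_PER_LONG);
	else
		priv_bitmap[gfn / BITS_PER_LONG] &= ~(1UL << (gfn % BITS_PER_LONG));
}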
I spent some time getting the SEV/SEV-ES priv memfd selftests to execute from
the private fd as well, and ended up making similar changes as part of this
github tree:
https://github.com/vishals4gh/linux/commits/sev_upm_selftests_rfc_v2

> But probably worth having some discussion on how exactly we should define this
> mode, and whether that meshes with what TDX folks are planning.
>
> I've pushed my UPM-mode selftest additions here:
> https://github.com/mdroth/linux/commits/sev_upm_selftests_rfc_v1_upmmode
>
> And the UPM SEV/SEV-SNP tree I'm running them against (DISCLAIMER: EXPERIMENTAL):
> https://github.com/mdroth/linux/commits/pfdv6-on-snpv6-upm1

Thanks for the references here. They give a clear picture of the status of
priv memfd integration with SEV-SNP VMs, and this work will be the base of
future SEV-specific priv memfd selftest patches as things get more stable.

I see the usage of pwrite to populate the initial private memory contents.
Does it make sense to have SEV_VM_LAUNCH_UPDATE_DATA handle the private fd
population as well? I tried to prototype it here:
https://github.com/vishals4gh/linux/commit/c85ee15c8bf9d5d43be9a34898176e8230a3b680
following a suggestion from Erdem Aktas (erdemaktas@google) while discussing
executing guest code from the private fd.
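(Roughly the userspace-visible difference I have in mind; this is only a
sketch with an assumed private-fd offset convention and error handling, not
the actual selftest or prototype code:)

/* Sketch only: contrast populating the initial guest payload by writing
 * into the private fd from userspace (what the selftests currently do via
 * pwrite) with handing a shared buffer to the SEV launch-update ioctl and
 * letting the kernel copy/encrypt it, so userspace never needs write
 * access to the private fd. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/kvm.h>

/* Option 1: write the payload straight into the private fd. */
static void populate_via_pwrite(int private_fd, off_t gpa_offset,
				const void *payload, size_t len)
{
	if (pwrite(private_fd, payload, len, gpa_offset) != (ssize_t)len) {
		perror("pwrite(private fd)");
		exit(1);
	}
}

/* Option 2: let KVM_SEV_LAUNCH_UPDATE_DATA pull the payload in. */
static void populate_via_launch_update(int vm_fd, int sev_fd,
				       void *payload, uint32_t len)
{
	struct kvm_sev_launch_update_data update = {
		.uaddr = (uint64_t)(unsigned long)payload,
		.len = len,
	};
	struct kvm_sev_cmd cmd = {
		.id = KVM_SEV_LAUNCH_UPDATE_DATA,
		.data = (uint64_t)(unsigned long)&update,
		.sev_fd = (uint32_t)sev_fd,
	};

	if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0) {
		perror("KVM_SEV_LAUNCH_UPDATE_DATA");
		exit(1);
	}
}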
Apart from aspects I might not be aware of, the SEV_VM_LAUNCH_UPDATE_DATA
approach can have a performance overhead depending on the initial guest UEFI
boot memory requirements. But it would allow the userspace VMM to keep most of
the guest VM boot memory setup the same, and avoid changing the host kernel to
allow private memfd writes from userspace.

Regards,
Vishal