From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20220519153713.819591-1-chao.p.peng@linux.intel.com>
 <20220607065749.GA1513445@chaop.bj.intel.com>
 <20220608021820.GA1548172@chaop.bj.intel.com>
In-Reply-To: <20220608021820.GA1548172@chaop.bj.intel.com>
From: Marc Orr
Date: Thu, 9 Jun 2022 17:11:21 -0700
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory
To: Chao Peng
Cc: Vishal Annapurve, kvm list, LKML, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
 linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini,
 Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
 Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 x86, "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields",
 Andrew Morton, Mike Rapoport, Steven Price, "Maciej S. Szmigiero",
 Vlastimil Babka, Yu Zhang, "Kirill A. Shutemov", Andy Lutomirski,
 Jun Nakajima, Dave Hansen, Andi Kleen, David Hildenbrand,
 aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com,
 Quentin Perret, Michael Roth, mhocko@suse.com
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-api@vger.kernel.org

On Tue, Jun 7, 2022 at 7:22 PM Chao Peng wrote:
>
> On Tue, Jun 07, 2022 at 05:55:46PM -0700, Marc Orr wrote:
> > On Tue, Jun 7, 2022 at 12:01 AM Chao Peng wrote:
> > >
> > > On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> > > > >
> > > > > Private memory map/unmap and conversion
> > > > > ---------------------------------------
> > > > > Userspace's map/unmap operations are done by fallocate() ioctl on the
> > > > > backing store fd.
> > > > >   - map: default fallocate() with mode=0.
> > > > >   - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > > > > The map/unmap will trigger the above memfile_notifier_ops to let KVM
> > > > > map/unmap secondary MMU page tables.
> > > > >
> > > > ....
> > > > > QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> > > > >
> > > > > An example QEMU command line for TDX test:
> > > > > -object tdx-guest,id=tdx \
> > > > > -object memory-backend-memfd-private,id=ram1,size=2G \
> > > > > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> > > > >
> > > >
> > > > There should be more discussion around double allocation scenarios
> > > > when using the private fd approach. A malicious guest or a buggy
> > > > userspace VMM can cause physical memory to get allocated for both the
> > > > shared (memory accessible from the host) and private fds backing the
> > > > guest memory. The userspace VMM will need to unback the shared guest
> > > > memory while handling the conversion from shared to private in order
> > > > to prevent double allocation, even with malicious guests or bugs in
> > > > the userspace VMM.
> > >
> > > I don't know how a malicious guest can cause that. The initial design of
> > > this series is to put the private/shared memory into two different
> > > address spaces and give the userspace VMM the flexibility to convert
> > > between the two. It can choose to respect the guest conversion request
> > > or not.
> >
> > For example, the guest could maliciously give a device driver a
> > private page so that a host-side virtual device will blindly write the
> > private page.
>
> With this patch series, it's actually not even possible for the userspace
> VMM to allocate a private page by a direct write; it's basically unmapped
> from there. If it really wants to, it has to do something special, by
> intention, and that's basically the conversion, which we should allow.

I think Vishal did a better job to explain this scenario in his last
reply than I did.
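To make that "by intention" path concrete, here's a rough sketch of what I'd
expect the userspace side of a shared->private conversion to look like, built
on the fallocate() semantics from the cover letter quoted above (mode=0 backs
a range, FALLOC_FL_PUNCH_HOLE unbacks it). The fd names, and the choice of
punching the shared memfd rather than using madvise(MADV_DONTNEED) on an
anonymous mapping, are illustrative assumptions on my part, not code from the
series or the QEMU branch:

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>

/* Illustrative only: shared_fd is assumed to be an ordinary memfd backing
 * the shared view of guest memory, priv_fd the private backing store fd. */
static int convert_shared_to_private(int shared_fd, int priv_fd,
                                     off_t offset, off_t len)
{
        /* Unback the shared side first so the range is never resident in
         * both backing stores at once (the double-allocation concern). */
        if (fallocate(shared_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                      offset, len) < 0) {
                perror("punch hole on shared fd");
                return -1;
        }

        /* "map": plain fallocate() with mode=0 allocates the private pages;
         * per the cover letter, the memfile notifier then lets KVM update
         * the secondary MMU page tables. */
        if (fallocate(priv_fd, 0, offset, len) < 0) {
                perror("fallocate on private fd");
                return -1;
        }

        return 0;
}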

> > > It's possible for a userspace VMM to cause double allocation if it fails
> > > to call the unback operation during the conversion; this may be a bug
> > > or not. Double allocation may not be a wrong thing, even in conception.
> > > At least TDX allows you to use half shared, half private in the guest,
> > > meaning both shared and private can be effective. Unbacking the memory
> > > is just the current QEMU implementation choice.
> >
> > Right. But the idea is that this patch series should accommodate all
> > of the CVM architectures. Or at least that's what I know was
> > envisioned last time we discussed this topic for SNP [*].
>
> AFAICS, this series should work for both TDX and SNP, and other CVM
> architectures. I don't see where TDX can work but SNP cannot, or did I
> miss something here?

Agreed. I was just responding to the "At least TDX..." bit. Sorry for
any confusion.

> > Regardless, it's important to ensure that the VM respects its memory
> > budget. For example, within Google, we run VMs inside of containers.
> > So if we double allocate, we're going to OOM. This seems acceptable for
> > an early version of CVMs. But ultimately, I think we need a more
> > robust way to ensure that the VM operates within its memory container.
> > Otherwise, the OOM is going to be hard to diagnose and distinguish
> > from a real OOM.
>
> Thanks for bringing this up. But in my mind I still think the userspace
> VMM can do this, and it's its responsibility to guarantee that, if that
> is a hard requirement. By design, the userspace VMM is the decision-maker
> for page conversion and has all the necessary information to know which
> page is shared/private. It also has the necessary knobs to allocate/free
> the physical pages for guest memory. Definitely, we should make the
> userspace VMM more robust.

Vishal and Sean did a better job to articulate the concern in their
most recent replies.
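For what it's worth, the kind of bookkeeping I have in mind for making the
VMM robust here is pretty small: one state byte per guest page, so the VMM
always knows which backing should be populated and which fd needs a punch
before the other side is backed. The structure and names below are
hypothetical, not taken from QEMU or from this series:

#include <stdint.h>

/* Hypothetical per-page accounting a userspace VMM could keep so a guest
 * page is never left backed by both the shared memfd and the private fd. */
enum backing { BACKING_NONE = 0, BACKING_SHARED, BACKING_PRIVATE };

struct ram_accounting {
        uint64_t npages;     /* guest pages covered by this slot */
        uint8_t  *state;     /* one enum backing value per page */
        uint64_t resident;   /* pages currently backed by either store */
};

/* Record a conversion for one page and return the previous state, so the
 * caller knows which fd needs a punch-hole before backing the new side. */
static enum backing account_conversion(struct ram_accounting *acct,
                                       uint64_t pfn, enum backing to)
{
        enum backing from = acct->state[pfn];

        acct->state[pfn] = to;
        if (from == BACKING_NONE && to != BACKING_NONE)
                acct->resident++;
        else if (from != BACKING_NONE && to == BACKING_NONE)
                acct->resident--;
        /* shared <-> private keeps 'resident' constant, because unbacking
         * the old side is part of the same conversion. */

        return from;
}

Comparing 'resident' against the container's memory budget would then give
an early, diagnosable signal before the OOM killer does.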