References: <20220519153713.819591-1-chao.p.peng@linux.intel.com>
 <20220607065749.GA1513445@chaop.bj.intel.com>
 <20220608021820.GA1548172@chaop.bj.intel.com>
 <20220614072800.GB1783435@chaop.bj.intel.com>
From: Andy Lutomirski
Date: Tue, 14 Jun 2022 13:59:41 -0700
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory
To: Sean Christopherson
Cc: Andy Lutomirski, Chao Peng, Vishal Annapurve, Marc Orr, kvm list, LKML,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
 linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini, Jonathan Corbet,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, x86, "H. Peter Anvin", Hugh Dickins, Jeff Layton,
 "J. Bruce Fields", Andrew Morton, Mike Rapoport, Steven Price,
 "Maciej S. Szmigiero", Vlastimil Babka, Yu Zhang, "Kirill A. Shutemov",
 Jun Nakajima, Dave Hansen, Andi Kleen, David Hildenbrand, aarcange@redhat.com,
 ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, Michael Roth,
 mhocko@suse.com
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-api@vger.kernel.org

On Tue, Jun 14, 2022 at 12:09 PM Sean Christopherson wrote:
>
> On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> > On Tue, Jun 14, 2022 at 12:32 AM Chao Peng wrote:
> > >
> > > On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > > > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > > >
> > > > One argument is that userspace can simply rely on cgroups to detect
> > > > misbehaving guests, but (a) those types of OOMs will be a nightmare to
> > > > debug and (b) an OOM kill from the host is typically considered a _host_
> > > > issue and will be treated as a missed SLO.
> > > >
> > > > An idea for handling this in the kernel without too much complexity would
> > > > be to add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent
> > > > page faults from allocating pages, i.e. holes could only be filled by an
> > > > explicit fallocate().  Minor faults, e.g. due to NUMA balancing
> > > > stupidity, and major faults due to swap would still work, but writes to
> > > > previously unreserved/unallocated memory would get a SIGSEGV on something
> > > > it has mapped.  That would allow the userspace VMM to prevent
> > > > unintentional allocations without having to coordinate
> > > > unmapping/remapping across multiple processes.
> > >
> > > Since this is mainly for shared memory and the motivation is catching
> > > misbehaved accesses, can we use mprotect(PROT_NONE) for this?  We can mark
> > > those ranges backed by the private fd as PROT_NONE during the conversion,
> > > so subsequent misbehaved accesses will be blocked instead of silently
> > > causing double allocation.
>
> PROT_NONE, a.k.a. mprotect(), has the same vma downsides as munmap().
> > This patch series is fairly close to implementing a rather more
> > efficient solution.  I'm not familiar enough with hypervisor userspace
> > to really know if this would work, but:
> >
> > What if shared guest memory could also be file-backed, either in the
> > same fd or with a second fd covering the shared portion of a memslot?
> > This would allow changes to the backing store (punching holes, etc.) to
> > be done without mmap_lock or host-userspace TLB flushes.  Depending on
> > what the guest is doing with its shared memory, userspace might need
> > the memory mapped or it might not.
>
> That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS idea.  The
> issue, unless I'm misreading code, is that punching a hole in the shared
> memory backing store doesn't prevent reallocating that hole on fault, i.e.
> a helper process that keeps a valid mapping of guest shared memory can
> silently fill the hole.
>
> What we're hoping to achieve is a way to prevent allocating memory without
> a very explicit action from userspace, e.g. fallocate().

Ah, I misunderstood.  I thought your goal was to mmap it and prevent
page faults from allocating.

It is indeed the case (and has been since before quite a few of us
were born) that a hole in a sparse file is logically just a bunch of
zeros.  A way to make a file for which a hole is an actual hole seems
like it would solve this problem nicely.

It could also be solved more specifically for KVM by making sure that
the private/shared mode that userspace programs is strict enough to
prevent accidental allocations -- if a GPA is definitively private,
shared, neither, or (potentially, on TDX only) both, then a page that
*isn't* shared will never be accidentally allocated by KVM.  If the
shared backing is not mmapped, it also won't be accidentally allocated
by host userspace on a stray or careless write.

--Andy