From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andy Lutomirski
Date: Tue, 14 Jun 2022 10:37:37 -0700
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory
To: Chao Peng
Cc: Sean Christopherson, Vishal Annapurve, Marc Orr, kvm list, LKML,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
 linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini,
 Jonathan Corbet, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86, "H. Peter Anvin",
 Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton, Mike Rapoport,
 Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Yu Zhang,
 "Kirill A. Shutemov", Andy Lutomirski, Jun Nakajima, Dave Hansen,
 Andi Kleen, David Hildenbrand, aarcange@redhat.com, ddutile@redhat.com,
 dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com
In-Reply-To: <20220614072800.GB1783435@chaop.bj.intel.com>
References: <20220519153713.819591-1-chao.p.peng@linux.intel.com>
 <20220607065749.GA1513445@chaop.bj.intel.com>
 <20220608021820.GA1548172@chaop.bj.intel.com>
 <20220614072800.GB1783435@chaop.bj.intel.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 14, 2022 at 12:32 AM Chao Peng wrote:
>
> On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> >
> > One argument is that userspace can simply rely on cgroups to detect
> > misbehaving guests, but (a) those types of OOMs will be a nightmare to
> > debug and (b) an OOM kill from the host is typically considered a _host_
> > issue and will be treated as a missed SLO.
> >
> > An idea for handling this in the kernel without too much complexity would
> > be to add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent page
> > faults from allocating pages, i.e. holes can only be filled by an explicit
> > fallocate(). Minor faults, e.g. due to NUMA balancing stupidity, and major
> > faults due to swap would still work, but writes to previously
> > unreserved/unallocated memory would get a SIGSEGV on something it has
> > mapped. That would allow the userspace VMM to prevent unintentional
> > allocations without having to coordinate unmapping/remapping across
> > multiple processes.
>
> Since this is mainly for shared memory and the motivation is catching
> misbehaved access, can we use mprotect(PROT_NONE) for this? We can mark
> those ranges backed by the private fd as PROT_NONE during the conversion so
> subsequent misbehaved accesses will be blocked instead of causing double
> allocation silently.

This patch series is fairly close to implementing a rather more
efficient solution.
I'm not familiar enough with hypervisor userspace to really know if this
would work, but: what if shared guest memory could also be file-backed,
either in the same fd or with a second fd covering the shared portion of a
memslot? That would allow changes to the backing store (punching holes, etc.)
to be done without taking mmap_lock or doing host-userspace TLB flushes.
Depending on what the guest is doing with its shared memory, userspace might
need the memory mapped, or it might not.

--Andy