Date: Thu, 9 Jun 2022 20:29:06 +0000
From: Sean Christopherson
To: Vishal Annapurve
Cc: Chao Peng, Marc Orr, kvm list, LKML, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
    linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini,
    Jonathan Corbet, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
    "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields",
    Andrew Morton, Mike Rapoport, Steven Price, "Maciej S. Szmigiero",
    Vlastimil Babka, Yu Zhang, "Kirill A. Shutemov",
Shutemov" , Andy Lutomirski , Jun Nakajima , Dave Hansen , Andi Kleen , David Hildenbrand , aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory Message-ID: References: <20220519153713.819591-1-chao.p.peng@linux.intel.com> <20220607065749.GA1513445@chaop.bj.intel.com> <20220608021820.GA1548172@chaop.bj.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-api@vger.kernel.org On Wed, Jun 08, 2022, Vishal Annapurve wrote: > ... > > With this patch series, it's actually even not possible for userspace VMM > > to allocate private page by a direct write, it's basically unmapped from > > there. If it really wants to, it should so something special, by intention, > > that's basically the conversion, which we should allow. > > > > A VM can pass GPA backed by private pages to userspace VMM and when > Userspace VMM accesses the backing hva there will be pages allocated > to back the shared fd causing 2 sets of pages backing the same guest > memory range. > > > Thanks for bringing this up. But in my mind I still think userspace VMM > > can do and it's its responsibility to guarantee that, if that is hard > > required. That was my initial reaction too, but there are unfortunate side effects to punting this to userspace. > By design, userspace VMM is the decision-maker for page > > conversion and has all the necessary information to know which page is > > shared/private. It also has the necessary knobs to allocate/free the > > physical pages for guest memory. Definitely, we should make userspace > > VMM more robust. > > Making Userspace VMM more robust to avoid double allocation can get > complex, it will have to keep track of all in-use (by Userspace VMM) > shared fd memory to disallow conversion from shared to private and > will have to ensure that all guest supplied addresses belong to shared > GPA ranges. IMO, the complexity argument isn't sufficient justfication for introducing new kernel functionality. If multiple processes are accessing guest memory then there already needs to be some amount of coordination, i.e. it can't be _that_ complex. My concern with forcing userspace to fully handle unmapping shared memory is that it may lead to additional performance overhead and/or noisy neighbor issues, even if all guests are well-behaved. Unnmapping arbitrary ranges will fragment the virtual address space and consume more memory for all the result VMAs. The extra memory consumption isn't that big of a deal, and it will be self-healing to some extent as VMAs will get merged when the holes are filled back in (if the guest converts back to shared), but it's still less than desirable. More concerning is having to take mmap_lock for write for every conversion, which is very problematic for configurations where a single userspace process maps memory belong to multiple VMs. Unmapping and remapping on every conversion will create a bottleneck, especially if a VM has sub-optimal behavior and is converting pages at a high rate. One argument is that userspace can simply rely on cgroups to detect misbehaving guests, but (a) those types of OOMs will be a nightmare to debug and (b) an OOM kill from the host is typically considered a _host_ issue and will be treated as a missed SLO. 
An idea for handling this in the kernel without too much complexity would be
to add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent page
faults from allocating pages, i.e. holes can only be filled by an explicit
fallocate().  Minor faults, e.g. due to NUMA balancing stupidity, and major
faults due to swap would still work, but a write to previously
unreserved/unallocated memory would get a SIGSEGV even though the VMM has the
memory mapped.  That would allow the userspace VMM to prevent unintentional
allocations without having to coordinate unmapping/remapping across multiple
processes.
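
Purely as a sketch, usage might look like the below.  F_SEAL_FAULT_ALLOCATIONS
doesn't exist, so the constant is a placeholder, not a real UAPI value:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical seal; placeholder value for illustration only. */
#define F_SEAL_FAULT_ALLOCATIONS 0x0040

static int create_shared_backing(size_t size)
{
	int fd = memfd_create("guest-shared", MFD_ALLOW_SEALING);

	if (fd < 0)
		return -1;

	/*
	 * Once sealed, faulting on a hole would SIGSEGV instead of
	 * allocating a page; only an explicit fallocate() fills holes.
	 */
	if (ftruncate(fd, size) ||
	    fcntl(fd, F_ADD_SEALS, F_SEAL_FAULT_ALLOCATIONS)) {
		close(fd);
		return -1;
	}
	return fd;
}

/* Private=>shared conversion becomes the only way to allocate pages. */
static int make_range_shared(int fd, off_t offset, off_t len)
{
	return fallocate(fd, 0, offset, len);
}

With that, an unintended VMM write to a converted-away range faults instead
of quietly instantiating a second set of pages, and no cross-process
unmap/remap dance is needed.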