From: Paolo Bonzini <pbonzini@redhat.com>
To: Sean Christopherson <seanjc@google.com>
Cc: "Xiaoyao Li" <xiaoyao.li@intel.com>,
	"Marc Zyngier" <maz@kernel.org>,
	"Oliver Upton" <oliver.upton@linux.dev>,
	"Huacai Chen" <chenhuacai@kernel.org>,
	"Michael Ellerman" <mpe@ellerman.id.au>,
	"Anup Patel" <anup@brainfault.org>,
	"Paul Walmsley" <paul.walmsley@sifive.com>,
	"Palmer Dabbelt" <palmer@dabbelt.com>,
	"Albert Ou" <aou@eecs.berkeley.edu>,
	"Alexander Viro" <viro@zeniv.linux.org.uk>,
	"Christian Brauner" <brauner@kernel.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	"Xu Yilun" <yilun.xu@intel.com>,
	"Chao Peng" <chao.p.peng@linux.intel.com>,
	"Fuad Tabba" <tabba@google.com>,
	"Jarkko Sakkinen" <jarkko@kernel.org>,
	"Anish Moorthy" <amoorthy@google.com>,
	"David Matlack" <dmatlack@google.com>,
	"Yu Zhang" <yu.c.zhang@linux.intel.com>,
	"Isaku Yamahata" <isaku.yamahata@intel.com>,
	"Mickaël Salaün" <mic@digikod.net>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	"Vishal Annapurve" <vannapurve@google.com>,
	"Ackerley Tng" <ackerleytng@google.com>,
	"Maciej Szmigiero" <mail@maciej.szmigiero.name>,
	"David Hildenbrand" <david@redhat.com>,
	"Quentin Perret" <qperret@google.com>,
	"Michael Roth" <michael.roth@amd.com>,
	"Wei Wang" <wei.w.wang@intel.com>,
	"Liam Merwick" <liam.merwick@oracle.com>,
	"Isaku Yamahata" <isaku.yamahata@gmail.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [PATCH v13 17/35] KVM: Add transparent hugepage support for dedicated guest memory
Date: Thu, 2 Nov 2023 16:46:42 +0100	[thread overview]
Message-ID: <CABgObfa=DH7FySBviF63OS9sVog_wt-AqYgtUAGKqnY5Bizivw@mail.gmail.com> (raw)
In-Reply-To: <ZUPCWfO1iO77-KDA@google.com>

On Thu, Nov 2, 2023 at 4:38 PM Sean Christopherson <seanjc@google.com> wrote:
> Actually, looking at this again, there's not actually a hard dependency on THP.
> A THP-enabled kernel _probably_ gives a higher probability of using hugepages,
> but mostly because THP selects COMPACTION, and I suppose because using THP for
> other allocations reduces overall fragmentation.

Yes, that's why I didn't even bother enabling it unless THP is
enabled, but it makes even more sense to just try.

> So rather than honor KVM_GUEST_MEMFD_ALLOW_HUGEPAGE iff THP is enabled, I think
> we should do the below (I verified KVM can create hugepages with THP=n).  We'll
> need another capability, but (a) we probably should have that anyways and (b) it
> provides a cleaner path to adding PUD-sized hugepage support in the future.

I wonder if we need KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE though. This
should be a generic kernel API; in fact the sizes are already available,
in a not-so-friendly format, under /sys/kernel/mm/hugepages.

We should just add a /sys/kernel/mm/hugepages/sizes file that contains
"2097152 1073741824" on x86 (only the former if 1G pages are not
supported).
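
For reference, this is roughly what userspace has to do today to
discover the supported sizes (a rough, untested sketch, assuming the
usual hugepages-<size>kB directory names):

  #include <dirent.h>
  #include <stdio.h>

  int main(void)
  {
          DIR *dir = opendir("/sys/kernel/mm/hugepages");
          struct dirent *d;
          unsigned long kb;

          if (!dir)
                  return 1;
          /* Entries are named hugepages-<size>kB, e.g. hugepages-2048kB. */
          while ((d = readdir(dir)) != NULL) {
                  if (sscanf(d->d_name, "hugepages-%lukB", &kb) == 1)
                          printf("%lu\n", kb * 1024);
          }
          closedir(dir);
          return 0;
  }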

Plus: is this the best API if we need something else for 1G pages?

Let's drop *this* patch and proceed incrementally. (Again, this is
what I want to do with this final review: identify places that are
still sticky, and don't let them block the rest.)

Coincidentally, we have an open spot next week at Plumbers. Let's
extend Fuad's section to cover more guestmem work.

Paolo

> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> index c15de9852316..c9f449718fce 100644
> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> @@ -201,6 +201,10 @@ int main(int argc, char *argv[])
>
>         TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
>
> +       if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE) && thp_configured())
> +               TEST_ASSERT_EQ(get_trans_hugepagesz(),
> +                              kvm_check_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE));
> +
>         page_size = getpagesize();
>         total_size = page_size * 4;
>
> diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> index be311944e90a..245901587ed2 100644
> --- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> @@ -396,7 +396,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
>
>         vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
>
> -       if (backing_src_can_be_huge(src_type))
> +       if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE))
>                 memfd_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
>         else
>                 memfd_flags = 0;
>
> --
> From: Sean Christopherson <seanjc@google.com>
> Date: Wed, 25 Oct 2023 16:26:41 -0700
> Subject: [PATCH] KVM: Add best-effort hugepage support for dedicated guest
>  memory
>
> Extend guest_memfd to allow backing guest memory with hugepages.  For now,
> make hugepage utilization best-effort, i.e. fall back to non-huge mappings
> if a hugepage can't be allocated.  Guaranteeing hugepages would require a
> dedicated memory pool and significantly more complexity and churn.
>
> Require userspace to opt in via a flag even though it's unlikely real use
> cases will ever want to use order-0 pages, e.g. to give userspace a safety
> valve in case hugepage support is buggy, and to allow for easier testing
> of both paths.
>
> Do not take a dependency on CONFIG_TRANSPARENT_HUGEPAGE, as THP enabling
> primarily deals with userspace page tables, which are explicitly not in
> play for guest_memfd.  Selecting THP does make obtaining hugepages more
> likely, but only because THP selects CONFIG_COMPACTION.  Don't select
> CONFIG_COMPACTION either, because again it's not a hard dependency.
>
> For simplicity, require the guest_memfd size to be a multiple of the
> hugepage size, e.g. so that KVM doesn't need to do bounds checking when
> deciding whether or not to allocate a huge folio.
>
> When reporting the max order when KVM gets a pfn from guest_memfd, force
> order-0 pages if the hugepage is not fully contained by the memslot
> binding, e.g. if userspace requested hugepages but punches a hole in the
> memslot bindings in order to emulate x86's VGA hole.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  Documentation/virt/kvm/api.rst | 17 +++++++++
>  include/uapi/linux/kvm.h       |  3 ++
>  virt/kvm/guest_memfd.c         | 69 +++++++++++++++++++++++++++++-----
>  virt/kvm/kvm_main.c            |  4 ++
>  4 files changed, 84 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index e82c69d5e755..ccdd5413920d 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -6176,6 +6176,8 @@ and cannot be resized  (guest_memfd files do however support PUNCH_HOLE).
>         __u64 reserved[6];
>    };
>
> +  #define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE         (1ULL << 0)
> +
>  Conceptually, the inode backing a guest_memfd file represents physical memory,
>  i.e. is coupled to the virtual machine as a thing, not to a "struct kvm".  The
>  file itself, which is bound to a "struct kvm", is that instance's view of the
> @@ -6192,6 +6194,12 @@ most one mapping per page, i.e. binding multiple memory regions to a single
>  guest_memfd range is not allowed (any number of memory regions can be bound to
>  a single guest_memfd file, but the bound ranges must not overlap).
>
> +If KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set in flags, KVM will attempt to allocate
> +and map PMD-size hugepages for the guest_memfd file.  This is currently best
> +effort.  If KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set, size must be aligned to at
> +least the size reported by KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE (which also
> +enumerates support for KVM_GUEST_MEMFD_ALLOW_HUGEPAGE).
> +
>  See KVM_SET_USER_MEMORY_REGION2 for additional details.
>
>  5. The kvm_run structure
> @@ -8639,6 +8647,15 @@ block sizes is exposed in KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES as a
>  64-bit bitmap (each bit describing a block size). The default value is
>  0, to disable the eager page splitting.
>
> +
> +8.41 KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE
> +------------------------------------------
> +
> +This is an information-only capability that returns guest_memfd's hugepage size
> +for PMD hugepages.  Returns '0' if guest_memfd is not supported, or if KVM
> +doesn't support creating hugepages for guest_memfd.  Note, guest_memfd doesn't
> +currently support PUD-sized hugepages.
> +
>  9. Known KVM API problems
>  =========================
>
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 25caee8d1a80..b78d0e3bf22a 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1217,6 +1217,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_MEMORY_FAULT_INFO 231
>  #define KVM_CAP_MEMORY_ATTRIBUTES 232
>  #define KVM_CAP_GUEST_MEMFD 233
> +#define KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE 234
>
>  #ifdef KVM_CAP_IRQ_ROUTING
>
> @@ -2303,4 +2304,6 @@ struct kvm_create_guest_memfd {
>         __u64 reserved[6];
>  };
>
> +#define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE         (1ULL << 0)
> +
>  #endif /* __LINUX_KVM_H */
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 98a12da80214..31b5e94d461a 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -13,14 +13,44 @@ struct kvm_gmem {
>         struct list_head entry;
>  };
>
> +#define NR_PAGES_PER_PMD (1 << PMD_ORDER)
> +
> +static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index)
> +{
> +       unsigned long huge_index = round_down(index, NR_PAGES_PER_PMD);
> +       unsigned long flags = (unsigned long)inode->i_private;
> +       struct address_space *mapping  = inode->i_mapping;
> +       gfp_t gfp = mapping_gfp_mask(mapping);
> +       struct folio *folio;
> +
> +       if (!(flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE))
> +               return NULL;
> +
> +       if (filemap_range_has_page(mapping, huge_index << PAGE_SHIFT,
> +                                  (huge_index + NR_PAGES_PER_PMD - 1) << PAGE_SHIFT))
> +               return NULL;
> +
> +       folio = filemap_alloc_folio(gfp, PMD_ORDER);
> +       if (!folio)
> +               return NULL;
> +
> +       if (filemap_add_folio(mapping, folio, huge_index, gfp)) {
> +               folio_put(folio);
> +               return NULL;
> +       }
> +       return folio;
> +}
> +
>  static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
>  {
>         struct folio *folio;
>
> -       /* TODO: Support huge pages. */
> -       folio = filemap_grab_folio(inode->i_mapping, index);
> -       if (IS_ERR_OR_NULL(folio))
> -               return NULL;
> +       folio = kvm_gmem_get_huge_folio(inode, index);
> +       if (!folio) {
> +               folio = filemap_grab_folio(inode->i_mapping, index);
> +               if (IS_ERR_OR_NULL(folio))
> +                       return NULL;
> +       }
>
>         /*
>          * Use the up-to-date flag to track whether or not the memory has been
> @@ -373,6 +403,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>         inode->i_mode |= S_IFREG;
>         inode->i_size = size;
>         mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
> +       mapping_set_large_folios(inode->i_mapping);
>         mapping_set_unmovable(inode->i_mapping);
>         /* Unmovable mappings are supposed to be marked unevictable as well. */
>         WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
> @@ -394,14 +425,18 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>
>  int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
>  {
> +       u64 valid_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
>         loff_t size = args->size;
>         u64 flags = args->flags;
> -       u64 valid_flags = 0;
>
>         if (flags & ~valid_flags)
>                 return -EINVAL;
>
> -       if (size < 0 || !PAGE_ALIGNED(size))
> +       if (size <= 0 || !PAGE_ALIGNED(size))
> +               return -EINVAL;
> +
> +       if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
> +           !IS_ALIGNED(size, PMD_SIZE))
>                 return -EINVAL;
>
>         return __kvm_gmem_create(kvm, size, flags);
> @@ -501,7 +536,7 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
>  int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>                      gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
>  {
> -       pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
> +       pgoff_t index, huge_index;
>         struct kvm_gmem *gmem;
>         struct folio *folio;
>         struct page *page;
> @@ -514,6 +549,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>
>         gmem = file->private_data;
>
> +       index = gfn - slot->base_gfn + slot->gmem.pgoff;
>         if (WARN_ON_ONCE(xa_load(&gmem->bindings, index) != slot)) {
>                 r = -EIO;
>                 goto out_fput;
> @@ -533,9 +569,24 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>         page = folio_file_page(folio, index);
>
>         *pfn = page_to_pfn(page);
> -       if (max_order)
> +       if (!max_order)
> +               goto success;
> +
> +       *max_order = compound_order(compound_head(page));
> +       if (!*max_order)
> +               goto success;
> +
> +       /*
> +        * The folio can be mapped with a hugepage if and only if the folio is
> +        * fully contained by the range the memslot is bound to.  Note, the
> +        * caller is responsible for handling gfn alignment, this only deals
> +        * with the file binding.
> +        */
> +       huge_index = ALIGN(index, 1ull << *max_order);
> +       if (huge_index < ALIGN(slot->gmem.pgoff, 1ull << *max_order) ||
> +           huge_index + (1ull << *max_order) > slot->gmem.pgoff + slot->npages)
>                 *max_order = 0;
> -
> +success:
>         r = 0;
>
>  out_unlock:
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 5d1a2f1b4e94..0711f2c75667 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4888,6 +4888,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
>  #ifdef CONFIG_KVM_PRIVATE_MEM
>         case KVM_CAP_GUEST_MEMFD:
>                 return !kvm || kvm_arch_has_private_mem(kvm);
> +       case KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE:
> +               if (kvm && !kvm_arch_has_private_mem(kvm))
> +                       return 0;
> +               return PMD_SIZE;
>  #endif
>         default:
>                 break;
>
> base-commit: fcbef1e5e5d2a60dacac0d16c06ac00bedaefc0f
> --
>
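
To illustrate the API proposed above, creating a hugepage-backed
guest_memfd from userspace would look roughly like this (a sketch
against the in-flight patch, not tested; vm_fd is assumed to be a VM
file descriptor from KVM_CREATE_VM, and error handling is omitted):

  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/kvm.h>

  /* Create a guest_memfd, preferring PMD hugepages when available. */
  static int create_gmem(int vm_fd)
  {
          struct kvm_create_guest_memfd gmem = { 0 };
          long pmd_size = ioctl(vm_fd, KVM_CHECK_EXTENSION,
                                KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE);

          if (pmd_size > 0) {
                  /* The size must be a multiple of the reported PMD size. */
                  gmem.flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
                  gmem.size = 4 * pmd_size;
          } else {
                  gmem.size = 4 * getpagesize();
          }
          return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
  }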


WARNING: multiple messages have this Message-ID (diff)
From: Paolo Bonzini <pbonzini@redhat.com>
To: Sean Christopherson <seanjc@google.com>
Cc: "Xiaoyao Li" <xiaoyao.li@intel.com>,
	"Marc Zyngier" <maz@kernel.org>,
	"Oliver Upton" <oliver.upton@linux.dev>,
	"Huacai Chen" <chenhuacai@kernel.org>,
	"Michael Ellerman" <mpe@ellerman.id.au>,
	"Anup Patel" <anup@brainfault.org>,
	"Paul Walmsley" <paul.walmsley@sifive.com>,
	"Palmer Dabbelt" <palmer@dabbelt.com>,
	"Albert Ou" <aou@eecs.berkeley.edu>,
	"Alexander Viro" <viro@zeniv.linux.org.uk>,
	"Christian Brauner" <brauner@kernel.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	"Xu Yilun" <yilun.xu@intel.com>,
	"Chao Peng" <chao.p.peng@linux.intel.com>,
	"Fuad Tabba" <tabba@google.com>,
	"Jarkko Sakkinen" <jarkko@kernel.org>,
	"Anish Moorthy" <amoorthy@google.com>,
	"David Matlack" <dmatlack@google.com>,
	"Yu Zhang" <yu.c.zhang@linux.intel.com>,
	"Isaku Yamahata" <isaku.yamahata@intel.com>,
	"Mickaël Salaün" <mic@digikod.net>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	"Vishal Annapurve" <vannapurve@google.com>,
	"Ackerley Tng" <ackerleytng@google.com>,
	"Maciej Szmigiero" <mail@maciej.szmigiero.name>,
	"David Hildenbrand" <david@redhat.com>,
	"Quentin Perret" <qperret@google.com>,
	"Michael Roth" <michael.roth@amd.com>,
	Wang <wei.w.wang@intel.com>,
	"Liam Merwick" <liam.merwick@oracle.com>,
	"Isaku Yamahata" <isaku.yamahata@gmail.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [PATCH v13 17/35] KVM: Add transparent hugepage support for dedicated guest memory
Date: Thu, 2 Nov 2023 16:46:42 +0100	[thread overview]
Message-ID: <CABgObfa=DH7FySBviF63OS9sVog_wt-AqYgtUAGKqnY5Bizivw@mail.gmail.com> (raw)
In-Reply-To: <ZUPCWfO1iO77-KDA@google.com>

On Thu, Nov 2, 2023 at 4:38 PM Sean Christopherson <seanjc@google.com> wrote:
> Actually, looking that this again, there's not actually a hard dependency on THP.
> A THP-enabled kernel _probably_  gives a higher probability of using hugepages,
> but mostly because THP selects COMPACTION, and I suppose because using THP for
> other allocations reduces overall fragmentation.

Yes, that's why I didn't even bother enabling it unless THP is
enabled, but it makes even more sense to just try.

> So rather than honor KVM_GUEST_MEMFD_ALLOW_HUGEPAGE iff THP is enabled, I think
> we should do the below (I verified KVM can create hugepages with THP=n).  We'll
> need another capability, but (a) we probably should have that anyways and (b) it
> provides a cleaner path to adding PUD-sized hugepage support in the future.

I wonder if we need KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE though. This
should be a generic kernel API and in fact the sizes are available in
a not-so-friendly format in /sys/kernel/mm/hugepages.

We should just add /sys/kernel/mm/hugepages/sizes that contains
"2097152 1073741824" on x86 (only the former if 1G pages are not
supported).

Plus: is this the best API if we need something else for 1G pages?

Let's drop *this* patch and proceed incrementally. (Again, this is
what I want to do with this final review: identify places that are
stil sticky, and don't let them block the rest).

Coincidentially we have an open spot next week at plumbers. Let's
extend Fuad's section to cover more guestmem work.

Paolo

> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> index c15de9852316..c9f449718fce 100644
> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> @@ -201,6 +201,10 @@ int main(int argc, char *argv[])
>
>         TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
>
> +       if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE) && thp_configured())
> +               TEST_ASSERT_EQ(get_trans_hugepagesz(),
> +                              kvm_check_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE));
> +
>         page_size = getpagesize();
>         total_size = page_size * 4;
>
> diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> index be311944e90a..245901587ed2 100644
> --- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> @@ -396,7 +396,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
>
>         vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
>
> -       if (backing_src_can_be_huge(src_type))
> +       if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE))
>                 memfd_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
>         else
>                 memfd_flags = 0;
>
> --
> From: Sean Christopherson <seanjc@google.com>
> Date: Wed, 25 Oct 2023 16:26:41 -0700
> Subject: [PATCH] KVM: Add best-effort hugepage support for dedicated guest
>  memory
>
> Extend guest_memfd to allow backing guest memory with hugepages.  For now,
> make hugepage utilization best-effort, i.e. fall back to non-huge mappings
> if a hugepage can't be allocated.  Guaranteeing hugepages would require a
> dedicated memory pool and significantly more complexity and churn..
>
> Require userspace to opt-in via a flag even though it's unlikely real use
> cases will ever want to use order-0 pages, e.g. to give userspace a safety
> valve in case hugepage support is buggy, and to allow for easier testing
> of both paths.
>
> Do not take a dependency on CONFIG_TRANSPARENT_HUGEPAGE, as THP enabling
> primarily deals with userspace page tables, which are explicitly not in
> play for guest_memfd.  Selecting THP does make obtaining hugepages more
> likely, but only because THP selects CONFIG_COMPACTION.  Don't select
> CONFIG_COMPACTION either, because again it's not a hard dependency.
>
> For simplicity, require the guest_memfd size to be a multiple of the
> hugepage size, e.g. so that KVM doesn't need to do bounds checking when
> deciding whether or not to allocate a huge folio.
>
> When reporting the max order when KVM gets a pfn from guest_memfd, force
> order-0 pages if the hugepage is not fully contained by the memslot
> binding, e.g. if userspace requested hugepages but punches a hole in the
> memslot bindings in order to emulate x86's VGA hole.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  Documentation/virt/kvm/api.rst | 17 +++++++++
>  include/uapi/linux/kvm.h       |  3 ++
>  virt/kvm/guest_memfd.c         | 69 +++++++++++++++++++++++++++++-----
>  virt/kvm/kvm_main.c            |  4 ++
>  4 files changed, 84 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index e82c69d5e755..ccdd5413920d 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -6176,6 +6176,8 @@ and cannot be resized  (guest_memfd files do however support PUNCH_HOLE).
>         __u64 reserved[6];
>    };
>
> +  #define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE         (1ULL << 0)
> +
>  Conceptually, the inode backing a guest_memfd file represents physical memory,
>  i.e. is coupled to the virtual machine as a thing, not to a "struct kvm".  The
>  file itself, which is bound to a "struct kvm", is that instance's view of the
> @@ -6192,6 +6194,12 @@ most one mapping per page, i.e. binding multiple memory regions to a single
>  guest_memfd range is not allowed (any number of memory regions can be bound to
>  a single guest_memfd file, but the bound ranges must not overlap).
>
> +If KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set in flags, KVM will attempt to allocate
> +and map PMD-size hugepages for the guest_memfd file.  This is currently best
> +effort.  If KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set, size must be aligned to at
> +least the size reported by KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE (which also
> +enumerates support for KVM_GUEST_MEMFD_ALLOW_HUGEPAGE).
> +
>  See KVM_SET_USER_MEMORY_REGION2 for additional details.
>
>  5. The kvm_run structure
> @@ -8639,6 +8647,15 @@ block sizes is exposed in KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES as a
>  64-bit bitmap (each bit describing a block size). The default value is
>  0, to disable the eager page splitting.
>
> +
> +8.41 KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE
> +------------------------------------------
> +
> +This is an information-only capability that returns guest_memfd's hugepage size
> +for PMD hugepages.  Returns '0' if guest_memfd is not supported, or if KVM
> +doesn't support creating hugepages for guest_memfd.  Note, guest_memfd doesn't
> +currently support PUD-sized hugepages.
> +
>  9. Known KVM API problems
>  =========================
>
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 25caee8d1a80..b78d0e3bf22a 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1217,6 +1217,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_MEMORY_FAULT_INFO 231
>  #define KVM_CAP_MEMORY_ATTRIBUTES 232
>  #define KVM_CAP_GUEST_MEMFD 233
> +#define KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE 234
>
>  #ifdef KVM_CAP_IRQ_ROUTING
>
> @@ -2303,4 +2304,6 @@ struct kvm_create_guest_memfd {
>         __u64 reserved[6];
>  };
>
> +#define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE         (1ULL << 0)
> +
>  #endif /* __LINUX_KVM_H */
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 98a12da80214..31b5e94d461a 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -13,14 +13,44 @@ struct kvm_gmem {
>         struct list_head entry;
>  };
>
> +#define NR_PAGES_PER_PMD (1 << PMD_ORDER)
> +
> +static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index)
> +{
> +       unsigned long huge_index = round_down(index, NR_PAGES_PER_PMD);
> +       unsigned long flags = (unsigned long)inode->i_private;
> +       struct address_space *mapping  = inode->i_mapping;
> +       gfp_t gfp = mapping_gfp_mask(mapping);
> +       struct folio *folio;
> +
> +       if (!(flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE))
> +               return NULL;
> +
> +       if (filemap_range_has_page(mapping, huge_index << PAGE_SHIFT,
> +                                  (huge_index + NR_PAGES_PER_PMD - 1) << PAGE_SHIFT))
> +               return NULL;
> +
> +       folio = filemap_alloc_folio(gfp, PMD_ORDER);
> +       if (!folio)
> +               return NULL;
> +
> +       if (filemap_add_folio(mapping, folio, huge_index, gfp)) {
> +               folio_put(folio);
> +               return NULL;
> +       }
> +       return folio;
> +}
> +
>  static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
>  {
>         struct folio *folio;
>
> -       /* TODO: Support huge pages. */
> -       folio = filemap_grab_folio(inode->i_mapping, index);
> -       if (IS_ERR_OR_NULL(folio))
> -               return NULL;
> +       folio = kvm_gmem_get_huge_folio(inode, index);
> +       if (!folio) {
> +               folio = filemap_grab_folio(inode->i_mapping, index);
> +               if (IS_ERR_OR_NULL(folio))
> +                       return NULL;
> +       }
>
>         /*
>          * Use the up-to-date flag to track whether or not the memory has been
> @@ -373,6 +403,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>         inode->i_mode |= S_IFREG;
>         inode->i_size = size;
>         mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
> +       mapping_set_large_folios(inode->i_mapping);
>         mapping_set_unmovable(inode->i_mapping);
>         /* Unmovable mappings are supposed to be marked unevictable as well. */
>         WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
> @@ -394,14 +425,18 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>
>  int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
>  {
> +       u64 valid_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
>         loff_t size = args->size;
>         u64 flags = args->flags;
> -       u64 valid_flags = 0;
>
>         if (flags & ~valid_flags)
>                 return -EINVAL;
>
> -       if (size < 0 || !PAGE_ALIGNED(size))
> +       if (size <= 0 || !PAGE_ALIGNED(size))
> +               return -EINVAL;
> +
> +       if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
> +           !IS_ALIGNED(size, PMD_SIZE))
>                 return -EINVAL;
>
>         return __kvm_gmem_create(kvm, size, flags);
> @@ -501,7 +536,7 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
>  int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>                      gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
>  {
> -       pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
> +       pgoff_t index, huge_index;
>         struct kvm_gmem *gmem;
>         struct folio *folio;
>         struct page *page;
> @@ -514,6 +549,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>
>         gmem = file->private_data;
>
> +       index = gfn - slot->base_gfn + slot->gmem.pgoff;
>         if (WARN_ON_ONCE(xa_load(&gmem->bindings, index) != slot)) {
>                 r = -EIO;
>                 goto out_fput;
> @@ -533,9 +569,24 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>         page = folio_file_page(folio, index);
>
>         *pfn = page_to_pfn(page);
> -       if (max_order)
> +       if (!max_order)
> +               goto success;
> +
> +       *max_order = compound_order(compound_head(page));
> +       if (!*max_order)
> +               goto success;
> +
> +       /*
> +        * The folio can be mapped with a hugepage if and only if the folio is
> +        * fully contained by the range the memslot is bound to.  Note, the
> +        * caller is responsible for handling gfn alignment, this only deals
> +        * with the file binding.
> +        */
> +       huge_index = ALIGN(index, 1ull << *max_order);
> +       if (huge_index < ALIGN(slot->gmem.pgoff, 1ull << *max_order) ||
> +           huge_index + (1ull << *max_order) > slot->gmem.pgoff + slot->npages)
>                 *max_order = 0;
> -
> +success:
>         r = 0;
>
>  out_unlock:
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 5d1a2f1b4e94..0711f2c75667 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4888,6 +4888,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
>  #ifdef CONFIG_KVM_PRIVATE_MEM
>         case KVM_CAP_GUEST_MEMFD:
>                 return !kvm || kvm_arch_has_private_mem(kvm);
> +       case KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE:
> +               if (kvm && !kvm_arch_has_private_mem(kvm))
> +                       return 0;
> +               return PMD_SIZE;
>  #endif
>         default:
>                 break;
>
> base-commit: fcbef1e5e5d2a60dacac0d16c06ac00bedaefc0f
> --
>


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

WARNING: multiple messages have this Message-ID (diff)
From: Paolo Bonzini <pbonzini@redhat.com>
To: Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org, "David Hildenbrand" <david@redhat.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	"Chao Peng" <chao.p.peng@linux.intel.com>,
	linux-riscv@lists.infradead.org,
	"Isaku Yamahata" <isaku.yamahata@gmail.com>,
	"Marc Zyngier" <maz@kernel.org>,
	"Huacai Chen" <chenhuacai@kernel.org>,
	"Xiaoyao Li" <xiaoyao.li@intel.com>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Wang <wei.w.wang@intel.com>, "Fuad Tabba" <tabba@google.com>,
	"Yu Zhang" <yu.c.zhang@linux.intel.com>,
	"Maciej Szmigiero" <mail@maciej.szmigiero.name>,
	"Albert Ou" <aou@eecs.berkeley.edu>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	"Michael Roth" <michael.roth@amd.com>,
	"Ackerley Tng" <ackerleytng@google.com>,
	"Alexander Viro" <viro@zeniv.linux.org.uk>,
	"Paul Walmsley" <paul.walmsley@sifive.com>,
	kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	"Mickaël Salaün" <mic@digikod.net>,
	"Isaku Yamahata" <isaku.yamahata@intel.com>,
	"Christian Brauner" <brauner@kernel.org>,
	"Quentin Perret" <qperret@google.com>,
	"A nup Patel" <anup@brainfault.org>,
	linux-mips@vger.kernel.org,
	"Oliver Upton" <oliver.upton@linux.dev>,
	"David Matlack" <dmatlack@google.com>,
	"Jarkko Sakkinen" <jarkko@kernel.org>,
	"Palmer Dabbelt" <palmer@dabbelt.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	kvm-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	"Liam Merwick" <liam.merwick@oracle.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Vishal Annapurve" <vannapurve@google.com>,
	linuxppc-dev@lists.ozlabs.org, "Xu Yilun" <yilun.xu@intel.com>,
	"Anish Moorthy" <amoorthy@google.com>
Subject: Re: [PATCH v13 17/35] KVM: Add transparent hugepage support for dedicated guest memory
Date: Thu, 2 Nov 2023 16:46:42 +0100	[thread overview]
Message-ID: <CABgObfa=DH7FySBviF63OS9sVog_wt-AqYgtUAGKqnY5Bizivw@mail.gmail.com> (raw)
In-Reply-To: <ZUPCWfO1iO77-KDA@google.com>

On Thu, Nov 2, 2023 at 4:38 PM Sean Christopherson <seanjc@google.com> wrote:
> Actually, looking that this again, there's not actually a hard dependency on THP.
> A THP-enabled kernel _probably_  gives a higher probability of using hugepages,
> but mostly because THP selects COMPACTION, and I suppose because using THP for
> other allocations reduces overall fragmentation.

Yes, that's why I didn't even bother enabling it unless THP is
enabled, but it makes even more sense to just try.

> So rather than honor KVM_GUEST_MEMFD_ALLOW_HUGEPAGE iff THP is enabled, I think
> we should do the below (I verified KVM can create hugepages with THP=n).  We'll
> need another capability, but (a) we probably should have that anyways and (b) it
> provides a cleaner path to adding PUD-sized hugepage support in the future.

I wonder if we need KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE though. This
should be a generic kernel API and in fact the sizes are available in
a not-so-friendly format in /sys/kernel/mm/hugepages.

We should just add /sys/kernel/mm/hugepages/sizes that contains
"2097152 1073741824" on x86 (only the former if 1G pages are not
supported).

Plus: is this the best API if we need something else for 1G pages?

Let's drop *this* patch and proceed incrementally. (Again, this is
what I want to do with this final review: identify places that are
stil sticky, and don't let them block the rest).

Coincidentially we have an open spot next week at plumbers. Let's
extend Fuad's section to cover more guestmem work.

Paolo

> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> index c15de9852316..c9f449718fce 100644
> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> @@ -201,6 +201,10 @@ int main(int argc, char *argv[])
>
>         TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
>
> +       if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE) && thp_configured())
> +               TEST_ASSERT_EQ(get_trans_hugepagesz(),
> +                              kvm_check_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE));
> +
>         page_size = getpagesize();
>         total_size = page_size * 4;
>
> diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> index be311944e90a..245901587ed2 100644
> --- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> @@ -396,7 +396,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
>
>         vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
>
> -       if (backing_src_can_be_huge(src_type))
> +       if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE))
>                 memfd_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
>         else
>                 memfd_flags = 0;
>
> --
> From: Sean Christopherson <seanjc@google.com>
> Date: Wed, 25 Oct 2023 16:26:41 -0700
> Subject: [PATCH] KVM: Add best-effort hugepage support for dedicated guest
>  memory
>
> Extend guest_memfd to allow backing guest memory with hugepages.  For now,
> make hugepage utilization best-effort, i.e. fall back to non-huge mappings
> if a hugepage can't be allocated.  Guaranteeing hugepages would require a
> dedicated memory pool and significantly more complexity and churn..
>
> Require userspace to opt-in via a flag even though it's unlikely real use
> cases will ever want to use order-0 pages, e.g. to give userspace a safety
> valve in case hugepage support is buggy, and to allow for easier testing
> of both paths.
>
> Do not take a dependency on CONFIG_TRANSPARENT_HUGEPAGE, as THP enabling
> primarily deals with userspace page tables, which are explicitly not in
> play for guest_memfd.  Selecting THP does make obtaining hugepages more
> likely, but only because THP selects CONFIG_COMPACTION.  Don't select
> CONFIG_COMPACTION either, because again it's not a hard dependency.
>
> For simplicity, require the guest_memfd size to be a multiple of the
> hugepage size, e.g. so that KVM doesn't need to do bounds checking when
> deciding whether or not to allocate a huge folio.
>
> When reporting the max order when KVM gets a pfn from guest_memfd, force
> order-0 pages if the hugepage is not fully contained by the memslot
> binding, e.g. if userspace requested hugepages but punches a hole in the
> memslot bindings in order to emulate x86's VGA hole.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  Documentation/virt/kvm/api.rst | 17 +++++++++
>  include/uapi/linux/kvm.h       |  3 ++
>  virt/kvm/guest_memfd.c         | 69 +++++++++++++++++++++++++++++-----
>  virt/kvm/kvm_main.c            |  4 ++
>  4 files changed, 84 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index e82c69d5e755..ccdd5413920d 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -6176,6 +6176,8 @@ and cannot be resized  (guest_memfd files do however support PUNCH_HOLE).
>         __u64 reserved[6];
>    };
>
> +  #define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE         (1ULL << 0)
> +
>  Conceptually, the inode backing a guest_memfd file represents physical memory,
>  i.e. is coupled to the virtual machine as a thing, not to a "struct kvm".  The
>  file itself, which is bound to a "struct kvm", is that instance's view of the
> @@ -6192,6 +6194,12 @@ most one mapping per page, i.e. binding multiple memory regions to a single
>  guest_memfd range is not allowed (any number of memory regions can be bound to
>  a single guest_memfd file, but the bound ranges must not overlap).
>
> +If KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set in flags, KVM will attempt to allocate
> +and map PMD-size hugepages for the guest_memfd file.  This is currently best
> +effort.  If KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set, size must be aligned to at
> +least the size reported by KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE (which also
> +enumerates support for KVM_GUEST_MEMFD_ALLOW_HUGEPAGE).
> +
>  See KVM_SET_USER_MEMORY_REGION2 for additional details.
>
>  5. The kvm_run structure
> @@ -8639,6 +8647,15 @@ block sizes is exposed in KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES as a
>  64-bit bitmap (each bit describing a block size). The default value is
>  0, to disable the eager page splitting.
>
> +
> +8.41 KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE
> +------------------------------------------
> +
> +This is an information-only capability that returns guest_memfd's hugepage size
> +for PMD hugepages.  Returns '0' if guest_memfd is not supported, or if KVM
> +doesn't support creating hugepages for guest_memfd.  Note, guest_memfd doesn't
> +currently support PUD-sized hugepages.
> +
>  9. Known KVM API problems
>  =========================
>
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 25caee8d1a80..b78d0e3bf22a 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1217,6 +1217,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_MEMORY_FAULT_INFO 231
>  #define KVM_CAP_MEMORY_ATTRIBUTES 232
>  #define KVM_CAP_GUEST_MEMFD 233
> +#define KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE 234
>
>  #ifdef KVM_CAP_IRQ_ROUTING
>
> @@ -2303,4 +2304,6 @@ struct kvm_create_guest_memfd {
>         __u64 reserved[6];
>  };
>
> +#define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE         (1ULL << 0)
> +
>  #endif /* __LINUX_KVM_H */
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 98a12da80214..31b5e94d461a 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -13,14 +13,44 @@ struct kvm_gmem {
>         struct list_head entry;
>  };
>
> +#define NR_PAGES_PER_PMD (1 << PMD_ORDER)
> +
> +static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index)
> +{
> +       unsigned long huge_index = round_down(index, NR_PAGES_PER_PMD);
> +       unsigned long flags = (unsigned long)inode->i_private;
> +       struct address_space *mapping  = inode->i_mapping;
> +       gfp_t gfp = mapping_gfp_mask(mapping);
> +       struct folio *folio;
> +
> +       if (!(flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE))
> +               return NULL;
> +
> +       if (filemap_range_has_page(mapping, huge_index << PAGE_SHIFT,
> +                                  (huge_index + NR_PAGES_PER_PMD - 1) << PAGE_SHIFT))
> +               return NULL;
> +
> +       folio = filemap_alloc_folio(gfp, PMD_ORDER);
> +       if (!folio)
> +               return NULL;
> +
> +       if (filemap_add_folio(mapping, folio, huge_index, gfp)) {
> +               folio_put(folio);
> +               return NULL;
> +       }
> +       return folio;
> +}
> +
>  static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
>  {
>         struct folio *folio;
>
> -       /* TODO: Support huge pages. */
> -       folio = filemap_grab_folio(inode->i_mapping, index);
> -       if (IS_ERR_OR_NULL(folio))
> -               return NULL;
> +       folio = kvm_gmem_get_huge_folio(inode, index);
> +       if (!folio) {
> +               folio = filemap_grab_folio(inode->i_mapping, index);
> +               if (IS_ERR_OR_NULL(folio))
> +                       return NULL;
> +       }
>
>         /*
>          * Use the up-to-date flag to track whether or not the memory has been
> @@ -373,6 +403,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>         inode->i_mode |= S_IFREG;
>         inode->i_size = size;
>         mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
> +       mapping_set_large_folios(inode->i_mapping);
>         mapping_set_unmovable(inode->i_mapping);
>         /* Unmovable mappings are supposed to be marked unevictable as well. */
>         WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
> @@ -394,14 +425,18 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>
>  int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
>  {
> +       u64 valid_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
>         loff_t size = args->size;
>         u64 flags = args->flags;
> -       u64 valid_flags = 0;
>
>         if (flags & ~valid_flags)
>                 return -EINVAL;
>
> -       if (size < 0 || !PAGE_ALIGNED(size))
> +       if (size <= 0 || !PAGE_ALIGNED(size))
> +               return -EINVAL;
> +
> +       if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
> +           !IS_ALIGNED(size, PMD_SIZE))
>                 return -EINVAL;
>
>         return __kvm_gmem_create(kvm, size, flags);
> @@ -501,7 +536,7 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
>  int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>                      gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
>  {
> -       pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
> +       pgoff_t index, huge_index;
>         struct kvm_gmem *gmem;
>         struct folio *folio;
>         struct page *page;
> @@ -514,6 +549,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>
>         gmem = file->private_data;
>
> +       index = gfn - slot->base_gfn + slot->gmem.pgoff;
>         if (WARN_ON_ONCE(xa_load(&gmem->bindings, index) != slot)) {
>                 r = -EIO;
>                 goto out_fput;
> @@ -533,9 +569,24 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>         page = folio_file_page(folio, index);
>
>         *pfn = page_to_pfn(page);
> -       if (max_order)
> +       if (!max_order)
> +               goto success;
> +
> +       *max_order = compound_order(compound_head(page));
> +       if (!*max_order)
> +               goto success;
> +
> +       /*
> +        * The folio can be mapped with a hugepage if and only if the folio is
> +        * fully contained by the range the memslot is bound to.  Note, the
> +        * caller is responsible for handling gfn alignment, this only deals
> +        * with the file binding.
> +        */
> +       huge_index = ALIGN(index, 1ull << *max_order);
> +       if (huge_index < ALIGN(slot->gmem.pgoff, 1ull << *max_order) ||
> +           huge_index + (1ull << *max_order) > slot->gmem.pgoff + slot->npages)
>                 *max_order = 0;
> -
> +success:
>         r = 0;
>
>  out_unlock:
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 5d1a2f1b4e94..0711f2c75667 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4888,6 +4888,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
>  #ifdef CONFIG_KVM_PRIVATE_MEM
>         case KVM_CAP_GUEST_MEMFD:
>                 return !kvm || kvm_arch_has_private_mem(kvm);
> +       case KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE:
> +               if (kvm && !kvm_arch_has_private_mem(kvm))
> +                       return 0;
> +               return PMD_SIZE;
>  #endif
>         default:
>                 break;
>
> base-commit: fcbef1e5e5d2a60dacac0d16c06ac00bedaefc0f
> --
>


WARNING: multiple messages have this Message-ID (diff)
From: Paolo Bonzini <pbonzini@redhat.com>
To: Sean Christopherson <seanjc@google.com>
Cc: "Xiaoyao Li" <xiaoyao.li@intel.com>,
	"Marc Zyngier" <maz@kernel.org>,
	"Oliver Upton" <oliver.upton@linux.dev>,
	"Huacai Chen" <chenhuacai@kernel.org>,
	"Michael Ellerman" <mpe@ellerman.id.au>,
	"Anup Patel" <anup@brainfault.org>,
	"Paul Walmsley" <paul.walmsley@sifive.com>,
	"Palmer Dabbelt" <palmer@dabbelt.com>,
	"Albert Ou" <aou@eecs.berkeley.edu>,
	"Alexander Viro" <viro@zeniv.linux.org.uk>,
	"Christian Brauner" <brauner@kernel.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	"Xu Yilun" <yilun.xu@intel.com>,
	"Chao Peng" <chao.p.peng@linux.intel.com>,
	"Fuad Tabba" <tabba@google.com>,
	"Jarkko Sakkinen" <jarkko@kernel.org>,
	"Anish Moorthy" <amoorthy@google.com>,
	"David Matlack" <dmatlack@google.com>,
	"Yu Zhang" <yu.c.zhang@linux.intel.com>,
	"Isaku Yamahata" <isaku.yamahata@intel.com>,
	"Mickaël Salaün" <mic@digikod.net>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	"Vishal Annapurve" <vannapurve@google.com>,
	"Ackerley Tng" <ackerleytng@google.com>,
	"Maciej Szmigiero" <mail@maciej.szmigiero.name>,
	"David Hildenbrand" <david@redhat.com>,
	"Quentin Perret" <qperret@google.com>,
	"Michael Roth" <michael.roth@amd.com>,
	Wang <wei.w.wang@intel.com>,
	"Liam Merwick" <liam.merwick@oracle.com>,
	"Isaku Yamahata" <isaku.yamahata@gmail.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [PATCH v13 17/35] KVM: Add transparent hugepage support for dedicated guest memory
Date: Thu, 2 Nov 2023 16:46:42 +0100	[thread overview]
Message-ID: <CABgObfa=DH7FySBviF63OS9sVog_wt-AqYgtUAGKqnY5Bizivw@mail.gmail.com> (raw)
In-Reply-To: <ZUPCWfO1iO77-KDA@google.com>

On Thu, Nov 2, 2023 at 4:38 PM Sean Christopherson <seanjc@google.com> wrote:
> Actually, looking that this again, there's not actually a hard dependency on THP.
> A THP-enabled kernel _probably_  gives a higher probability of using hugepages,
> but mostly because THP selects COMPACTION, and I suppose because using THP for
> other allocations reduces overall fragmentation.

Yes, that's why I didn't even bother enabling it unless THP is
enabled, but it makes even more sense to just try.

> So rather than honor KVM_GUEST_MEMFD_ALLOW_HUGEPAGE iff THP is enabled, I think
> we should do the below (I verified KVM can create hugepages with THP=n).  We'll
> need another capability, but (a) we probably should have that anyways and (b) it
> provides a cleaner path to adding PUD-sized hugepage support in the future.

I wonder if we need KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE though. This
should be a generic kernel API and in fact the sizes are available in
a not-so-friendly format in /sys/kernel/mm/hugepages.

We should just add /sys/kernel/mm/hugepages/sizes that contains
"2097152 1073741824" on x86 (only the former if 1G pages are not
supported).

Plus: is this the best API if we need something else for 1G pages?

Let's drop *this* patch and proceed incrementally. (Again, this is
what I want to do with this final review: identify places that are
stil sticky, and don't let them block the rest).

Coincidentially we have an open spot next week at plumbers. Let's
extend Fuad's section to cover more guestmem work.

Paolo

> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> index c15de9852316..c9f449718fce 100644
> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> @@ -201,6 +201,10 @@ int main(int argc, char *argv[])
>
>         TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
>
> +       if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE) && thp_configured())
> +               TEST_ASSERT_EQ(get_trans_hugepagesz(),
> +                              kvm_check_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE));
> +
>         page_size = getpagesize();
>         total_size = page_size * 4;
>
> diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> index be311944e90a..245901587ed2 100644
> --- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
> @@ -396,7 +396,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
>
>         vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
>
> -       if (backing_src_can_be_huge(src_type))
> +       if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE))
>                 memfd_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
>         else
>                 memfd_flags = 0;
>
> --
> From: Sean Christopherson <seanjc@google.com>
> Date: Wed, 25 Oct 2023 16:26:41 -0700
> Subject: [PATCH] KVM: Add best-effort hugepage support for dedicated guest
>  memory
>
> Extend guest_memfd to allow backing guest memory with hugepages.  For now,
> make hugepage utilization best-effort, i.e. fall back to non-huge mappings
> if a hugepage can't be allocated.  Guaranteeing hugepages would require a
> dedicated memory pool and significantly more complexity and churn..
>
> Require userspace to opt-in via a flag even though it's unlikely real use
> cases will ever want to use order-0 pages, e.g. to give userspace a safety
> valve in case hugepage support is buggy, and to allow for easier testing
> of both paths.
>
> Do not take a dependency on CONFIG_TRANSPARENT_HUGEPAGE, as THP enabling
> primarily deals with userspace page tables, which are explicitly not in
> play for guest_memfd.  Selecting THP does make obtaining hugepages more
> likely, but only because THP selects CONFIG_COMPACTION.  Don't select
> CONFIG_COMPACTION either, because again it's not a hard dependency.
>
> For simplicity, require the guest_memfd size to be a multiple of the
> hugepage size, e.g. so that KVM doesn't need to do bounds checking when
> deciding whether or not to allocate a huge folio.
>
> When reporting the max order when KVM gets a pfn from guest_memfd, force
> order-0 pages if the hugepage is not fully contained by the memslot
> binding, e.g. if userspace requested hugepages but punches a hole in the
> memslot bindings in order to emulate x86's VGA hole.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  Documentation/virt/kvm/api.rst | 17 +++++++++
>  include/uapi/linux/kvm.h       |  3 ++
>  virt/kvm/guest_memfd.c         | 69 +++++++++++++++++++++++++++++-----
>  virt/kvm/kvm_main.c            |  4 ++
>  4 files changed, 84 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index e82c69d5e755..ccdd5413920d 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -6176,6 +6176,8 @@ and cannot be resized  (guest_memfd files do however support PUNCH_HOLE).
>         __u64 reserved[6];
>    };
>
> +  #define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE         (1ULL << 0)
> +
>  Conceptually, the inode backing a guest_memfd file represents physical memory,
>  i.e. is coupled to the virtual machine as a thing, not to a "struct kvm".  The
>  file itself, which is bound to a "struct kvm", is that instance's view of the
> @@ -6192,6 +6194,12 @@ most one mapping per page, i.e. binding multiple memory regions to a single
>  guest_memfd range is not allowed (any number of memory regions can be bound to
>  a single guest_memfd file, but the bound ranges must not overlap).
>
> +If KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set in flags, KVM will attempt to allocate
> +and map PMD-size hugepages for the guest_memfd file.  This is currently best
> +effort.  If KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set, size must be aligned to at
> +least the size reported by KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE (which also
> +enumerates support for KVM_GUEST_MEMFD_ALLOW_HUGEPAGE).
> +
>  See KVM_SET_USER_MEMORY_REGION2 for additional details.
>
>  5. The kvm_run structure
> @@ -8639,6 +8647,15 @@ block sizes is exposed in KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES as a
>  64-bit bitmap (each bit describing a block size). The default value is
>  0, to disable the eager page splitting.
>
> +
> +8.41 KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE
> +------------------------------------------
> +
> +This is an information-only capability that returns guest_memfd's hugepage size
> +for PMD hugepages.  Returns '0' if guest_memfd is not supported, or if KVM
> +doesn't support creating hugepages for guest_memfd.  Note, guest_memfd doesn't
> +currently support PUD-sized hugepages.
> +
>  9. Known KVM API problems
>  =========================
>
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 25caee8d1a80..b78d0e3bf22a 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1217,6 +1217,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_MEMORY_FAULT_INFO 231
>  #define KVM_CAP_MEMORY_ATTRIBUTES 232
>  #define KVM_CAP_GUEST_MEMFD 233
> +#define KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE 234
>
>  #ifdef KVM_CAP_IRQ_ROUTING
>
> @@ -2303,4 +2304,6 @@ struct kvm_create_guest_memfd {
>         __u64 reserved[6];
>  };
>
> +#define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE         (1ULL << 0)
> +
>  #endif /* __LINUX_KVM_H */
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 98a12da80214..31b5e94d461a 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -13,14 +13,44 @@ struct kvm_gmem {
>         struct list_head entry;
>  };
>
> +#define NR_PAGES_PER_PMD (1 << PMD_ORDER)
> +
> +static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index)
> +{
> +       unsigned long huge_index = round_down(index, NR_PAGES_PER_PMD);
> +       unsigned long flags = (unsigned long)inode->i_private;
> +       struct address_space *mapping  = inode->i_mapping;
> +       gfp_t gfp = mapping_gfp_mask(mapping);
> +       struct folio *folio;
> +
> +       if (!(flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE))
> +               return NULL;
> +
> +       if (filemap_range_has_page(mapping, huge_index << PAGE_SHIFT,
> +                                  (huge_index + NR_PAGES_PER_PMD - 1) << PAGE_SHIFT))
> +               return NULL;
> +
> +       folio = filemap_alloc_folio(gfp, PMD_ORDER);
> +       if (!folio)
> +               return NULL;
> +
> +       if (filemap_add_folio(mapping, folio, huge_index, gfp)) {
> +               folio_put(folio);
> +               return NULL;
> +       }
> +       return folio;
> +}
> +
>  static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
>  {
>         struct folio *folio;
>
> -       /* TODO: Support huge pages. */
> -       folio = filemap_grab_folio(inode->i_mapping, index);
> -       if (IS_ERR_OR_NULL(folio))
> -               return NULL;
> +       folio = kvm_gmem_get_huge_folio(inode, index);
> +       if (!folio) {
> +               folio = filemap_grab_folio(inode->i_mapping, index);
> +               if (IS_ERR_OR_NULL(folio))
> +                       return NULL;
> +       }
>
>         /*
>          * Use the up-to-date flag to track whether or not the memory has been
> @@ -373,6 +403,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>         inode->i_mode |= S_IFREG;
>         inode->i_size = size;
>         mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
> +       mapping_set_large_folios(inode->i_mapping);
>         mapping_set_unmovable(inode->i_mapping);
>         /* Unmovable mappings are supposed to be marked unevictable as well. */
>         WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
> @@ -394,14 +425,18 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>
>  int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
>  {
> +       u64 valid_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
>         loff_t size = args->size;
>         u64 flags = args->flags;
> -       u64 valid_flags = 0;
>
>         if (flags & ~valid_flags)
>                 return -EINVAL;
>
> -       if (size < 0 || !PAGE_ALIGNED(size))
> +       if (size <= 0 || !PAGE_ALIGNED(size))
> +               return -EINVAL;
> +
> +       if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
> +           !IS_ALIGNED(size, PMD_SIZE))
>                 return -EINVAL;
>
>         return __kvm_gmem_create(kvm, size, flags);
> @@ -501,7 +536,7 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
>  int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>                      gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
>  {
> -       pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
> +       pgoff_t index, huge_index;
>         struct kvm_gmem *gmem;
>         struct folio *folio;
>         struct page *page;
> @@ -514,6 +549,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>
>         gmem = file->private_data;
>
> +       index = gfn - slot->base_gfn + slot->gmem.pgoff;
>         if (WARN_ON_ONCE(xa_load(&gmem->bindings, index) != slot)) {
>                 r = -EIO;
>                 goto out_fput;
> @@ -533,9 +569,24 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>         page = folio_file_page(folio, index);
>
>         *pfn = page_to_pfn(page);
> -       if (max_order)
> +       if (!max_order)
> +               goto success;
> +
> +       *max_order = compound_order(compound_head(page));
> +       if (!*max_order)
> +               goto success;
> +
> +       /*
> +        * The folio can be mapped with a hugepage if and only if the folio is
> +        * fully contained by the range the memslot is bound to.  Note, the
> +        * caller is responsible for handling gfn alignment, this only deals
> +        * with the file binding.
> +        */
> +       huge_index = round_down(index, 1ull << *max_order);
> +       if (huge_index < ALIGN(slot->gmem.pgoff, 1ull << *max_order) ||
> +           huge_index + (1ull << *max_order) > slot->gmem.pgoff + slot->npages)
>                 *max_order = 0;
> -
> +success:
>         r = 0;
>
>  out_unlock:
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 5d1a2f1b4e94..0711f2c75667 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4888,6 +4888,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
>  #ifdef CONFIG_KVM_PRIVATE_MEM
>         case KVM_CAP_GUEST_MEMFD:
>                 return !kvm || kvm_arch_has_private_mem(kvm);
> +       case KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE:
> +               if (kvm && !kvm_arch_has_private_mem(kvm))
> +                       return 0;
> +               return PMD_SIZE;
>  #endif
>         default:
>                 break;
>
> base-commit: fcbef1e5e5d2a60dacac0d16c06ac00bedaefc0f
> --
>
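
For reference, a minimal userspace sketch of the flow the quoted patch
documents. It assumes the series' uapi (struct kvm_create_guest_memfd,
KVM_CREATE_GUEST_MEMFD, KVM_GUEST_MEMFD_ALLOW_HUGEPAGE and
KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE) lands as posted; since this patch was
dropped, none of those names are guaranteed to exist in a released
<linux/kvm.h>:

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

/*
 * Create a guest_memfd for a VM, opting in to PMD hugepages when the
 * host advertises them.  Per the quoted documentation, size must be a
 * multiple of the capability's return value when the flag is set, and
 * the flag itself remains best effort.
 */
static int create_gmem(int vm_fd, uint64_t size)
{
	struct kvm_create_guest_memfd gmem = {
		.size = size,
	};
	long pmd_size;

	/* '0' means guest_memfd hugepages are unsupported on this host. */
	pmd_size = ioctl(vm_fd, KVM_CHECK_EXTENSION,
			 KVM_CAP_GUEST_MEMFD_HUGEPAGE_PMD_SIZE);
	if (pmd_size > 0 && !(size & (pmd_size - 1)))
		gmem.flags |= KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;

	return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
}

On x86, pmd_size would be 2MiB here: a 1GiB file sets the flag, while a 1MiB
file silently falls back to 4KiB pages (setting the flag anyway would get
-EINVAL from the PMD_SIZE alignment check in kvm_gmem_create()).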

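The binding-containment clamp at the end of kvm_gmem_get_pfn() is easiest to
see with concrete numbers. A stand-alone sketch (pgoff, npages and the
faulting indexes are made up; the masking mirrors round_down()/ALIGN() for a
power-of-two order):

#include <stdio.h>

int main(void)
{
	unsigned long long order = 9;          /* PMD order on x86: 512 pages */
	unsigned long long nr = 1ULL << order;
	unsigned long long pgoff = 100;        /* hypothetical slot->gmem.pgoff */
	unsigned long long npages = 1024;      /* hypothetical slot->npages */
	unsigned long long idx[2] = { 300, 700 };

	for (int i = 0; i < 2; i++) {
		/* Start of the (naturally aligned) folio: round_down(index, nr). */
		unsigned long long huge_index = idx[i] & ~(nr - 1);
		/* First aligned index inside the binding: ALIGN(pgoff, nr) = 512. */
		unsigned long long lo = (pgoff + nr - 1) & ~(nr - 1);
		int contained = huge_index >= lo &&
				huge_index + nr <= pgoff + npages;

		printf("index %llu -> max_order %llu\n",
		       idx[i], contained ? order : 0ULL);
	}
	/*
	 * Prints:
	 *   index 300 -> max_order 0   (folio [0, 512) starts before pgoff 100)
	 *   index 700 -> max_order 9   (folio [512, 1024) fits in [100, 1124))
	 */
	return 0;
}

Binding the memslot at a PMD-aligned guest_memfd offset with a PMD-aligned
size avoids the clamp for the whole range.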


Thread overview: 583+ messages
2023-10-27 18:21 [PATCH v13 00/35] KVM: guest_memfd() and per-page attributes Sean Christopherson
2023-10-27 18:21 ` [PATCH v13 01/35] KVM: Tweak kvm_hva_range and hva_handler_t to allow reusing for gfn ranges Sean Christopherson
2023-11-01 12:46   ` Fuad Tabba
2023-10-27 18:21 ` [PATCH v13 02/35] KVM: Assert that mmu_invalidate_in_progress *never* goes negative Sean Christopherson
2023-10-30 16:27   ` Paolo Bonzini
2023-11-01 12:46   ` Fuad Tabba
2023-10-27 18:21 ` [PATCH v13 03/35] KVM: Use gfn instead of hva for mmu_notifier_retry Sean Christopherson
2023-10-30 16:30   ` Paolo Bonzini
2023-10-30 16:53   ` David Matlack
2023-10-30 17:00     ` Paolo Bonzini
2023-10-30 18:21       ` David Matlack
2023-10-30 18:19     ` David Matlack
2023-11-01 15:31   ` Xu Yilun
2023-10-27 18:21 ` [PATCH v13 04/35] KVM: WARN if there are dangling MMU invalidations at VM destruction Sean Christopherson
2023-10-30 16:32   ` Paolo Bonzini
2023-11-01 12:50   ` Fuad Tabba
2023-10-27 18:21 ` [PATCH v13 05/35] KVM: PPC: Drop dead code related to KVM_ARCH_WANT_MMU_NOTIFIER Sean Christopherson
2023-10-30 16:34   ` Paolo Bonzini
2023-11-01 12:51   ` Fuad Tabba
2023-10-27 18:21 ` [PATCH v13 06/35] KVM: PPC: Return '1' unconditionally for KVM_CAP_SYNC_MMU Sean Christopherson
2023-10-27 18:21 ` [PATCH v13 07/35] KVM: Convert KVM_ARCH_WANT_MMU_NOTIFIER to CONFIG_KVM_GENERIC_MMU_NOTIFIER Sean Christopherson
2023-10-30 16:37   ` Paolo Bonzini
2023-11-01 12:54   ` Fuad Tabba
2023-10-27 18:21 ` [PATCH v13 08/35] KVM: Introduce KVM_SET_USER_MEMORY_REGION2 Sean Christopherson
2023-10-30 16:41   ` Paolo Bonzini
2023-10-30 20:25     ` Sean Christopherson
2023-10-30 22:12       ` Sean Christopherson
2023-10-30 23:22       ` Paolo Bonzini
2023-10-31  0:18         ` Sean Christopherson
2023-10-31  2:26   ` Xiaoyao Li
2023-10-31 14:04     ` Sean Christopherson
2023-11-01 14:19   ` Fuad Tabba
2023-10-27 18:21 ` [PATCH v13 09/35] KVM: Add KVM_EXIT_MEMORY_FAULT exit to report faults to userspace Sean Christopherson
2023-10-30 17:22   ` Paolo Bonzini
2023-11-01  7:30   ` Binbin Wu
2023-11-01 10:52   ` Huang, Kai
2023-11-01 17:36     ` Sean Christopherson
2023-11-02  2:19       ` Xiaoyao Li
2023-11-02 15:51         ` Sean Christopherson
2023-11-02  3:17       ` Huang, Kai
2023-11-02  9:35         ` Huang, Kai
2023-11-02 11:03           ` Paolo Bonzini
2023-11-02 15:44             ` Sean Christopherson
2023-11-02 18:35               ` Huang, Kai
2023-11-02 15:56         ` Sean Christopherson
2023-11-02 11:01       ` Paolo Bonzini
2023-11-03  4:09   ` Xu Yilun
2023-10-27 18:21 ` [PATCH v13 10/35] KVM: Add a dedicated mmu_notifier flag for reclaiming freed memory Sean Christopherson
2023-10-30 17:11   ` Paolo Bonzini
2023-11-02 13:55   ` Fuad Tabba
2023-10-27 18:21 ` [PATCH v13 11/35] KVM: Drop .on_unlock() mmu_notifier hook Sean Christopherson
2023-10-30 17:18   ` Paolo Bonzini
2023-11-02 13:55   ` Fuad Tabba
2023-10-27 18:21 ` [PATCH v13 12/35] KVM: Prepare for handling only shared mappings in mmu_notifier events Sean Christopherson
2023-10-30 17:21   ` Paolo Bonzini
2023-10-30 22:07     ` Sean Christopherson
2023-11-02  5:59   ` Binbin Wu
2023-11-02 11:14     ` Paolo Bonzini
2023-11-02 14:01   ` Fuad Tabba
2023-11-02 14:41     ` Sean Christopherson
2023-11-02 14:57       ` Fuad Tabba
2023-10-27 18:21 ` [PATCH v13 13/35] KVM: Introduce per-page memory attributes Sean Christopherson
2023-10-30  8:11   ` Chao Gao
2023-10-30 16:10     ` Sean Christopherson
2023-10-30 22:05       ` Sean Christopherson
2023-10-31 16:43   ` David Matlack
2023-11-02  3:01   ` Huang, Kai
2023-11-02 10:32     ` Paolo Bonzini
2023-11-02 10:55       ` Huang, Kai
2023-10-27 18:21 ` [PATCH v13 14/35] mm: Add AS_UNMOVABLE to mark mapping as completely unmovable Sean Christopherson
2023-10-30 17:24   ` Paolo Bonzini
2023-10-27 18:21 ` [PATCH v13 15/35] fs: Export anon_inode_getfile_secure() for use by KVM Sean Christopherson
2023-10-30 17:30   ` Paolo Bonzini
2023-11-02 16:24   ` Christian Brauner
2023-11-03 10:40     ` Paolo Bonzini
2023-10-27 18:21 ` [PATCH v13 16/35] KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for guest-specific backing memory Sean Christopherson
2023-10-31  2:27   ` Xiaoyao Li
2023-10-31  6:30   ` Chao Gao
2023-10-31 14:10     ` Sean Christopherson
2023-10-31 15:05   ` Fuad Tabba
2023-10-31 22:13     ` Sean Christopherson
2023-10-31 22:18       ` Paolo Bonzini
2023-11-01 10:51       ` Fuad Tabba
2023-11-01 21:55         ` Sean Christopherson
2023-11-02 13:52           ` Fuad Tabba
2023-11-03 23:17             ` Sean Christopherson
2023-10-31 18:24   ` David Matlack
2023-10-31 21:36     ` Sean Christopherson
2023-10-31 22:39       ` David Matlack
2023-11-02 15:48         ` Paolo Bonzini
2023-11-02 16:03           ` Sean Christopherson
2023-11-02 16:28             ` David Matlack
2023-11-02 17:37               ` Sean Christopherson
2023-11-03  9:42   ` Fuad Tabba
2023-11-04 10:26   ` Xu Yilun
2023-11-06 15:43     ` Sean Christopherson
2023-10-27 18:21 ` [PATCH v13 17/35] KVM: Add transparent hugepage support for dedicated guest memory Sean Christopherson
2023-10-31  8:35   ` Xiaoyao Li
2023-10-31 14:16     ` Sean Christopherson
2023-11-01  7:25       ` Xiaoyao Li
2023-11-01 13:41         ` Sean Christopherson
2023-11-01 13:49           ` Paolo Bonzini
2023-11-01 16:36             ` Sean Christopherson
2023-11-01 22:28               ` Paolo Bonzini
2023-11-01 22:34                 ` Sean Christopherson
2023-11-01 23:17                   ` Paolo Bonzini
2023-11-02 15:38                     ` Sean Christopherson
2023-11-02 15:46                       ` Paolo Bonzini [this message]
2023-11-27 11:13                         ` Vlastimil Babka
2023-11-29 22:40                           ` Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 18/35] KVM: x86: "Reset" vcpu->run->exit_reason early in KVM_RUN Sean Christopherson
2023-10-30 17:31   ` Paolo Bonzini
2023-11-02 14:16   ` Fuad Tabba
2023-10-27 18:22 ` [PATCH v13 19/35] KVM: x86: Disallow hugepages when memory attributes are mixed Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 20/35] KVM: x86/mmu: Handle page fault for private memory Sean Christopherson
2023-11-02 14:34   ` Fuad Tabba
2023-11-05 13:02   ` Xu Yilun
2023-11-05 16:19     ` Paolo Bonzini
2023-11-06 13:29       ` Xu Yilun
2023-11-06 15:56         ` Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 21/35] KVM: Drop superfluous __KVM_VCPU_MULTIPLE_ADDRESS_SPACE macro Sean Christopherson
2023-11-02 14:35   ` Fuad Tabba
2023-10-27 18:22 ` [PATCH v13 22/35] KVM: Allow arch code to track number of memslot address spaces per VM Sean Christopherson
2023-10-30 17:34   ` Paolo Bonzini
2023-11-02 14:52   ` Fuad Tabba
2023-10-27 18:22 ` [PATCH v13 23/35] KVM: x86: Add support for "protected VMs" that can utilize private memory Sean Christopherson
2023-10-30 17:36   ` Paolo Bonzini
2023-11-06 11:00   ` Fuad Tabba
2023-11-06 11:03     ` Paolo Bonzini
2023-10-27 18:22 ` [PATCH v13 24/35] KVM: selftests: Drop unused kvm_userspace_memory_region_find() helper Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 25/35] KVM: selftests: Convert lib's mem regions to KVM_SET_USER_MEMORY_REGION2 Sean Christopherson
2024-04-25 14:12   ` Dan Carpenter
2024-04-25 14:45     ` Shuah Khan
2024-04-25 15:09       ` Sean Christopherson
2024-04-25 16:22         ` Shuah Khan
2024-04-26  7:33         ` Jarkko Sakkinen
2023-10-27 18:22 ` [PATCH v13 26/35] KVM: selftests: Add support for creating private memslots Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 27/35] KVM: selftests: Add helpers to convert guest memory b/w private and shared Sean Christopherson
2023-11-06 11:26   ` Fuad Tabba
2023-10-27 18:22 ` [PATCH v13 28/35] KVM: selftests: Add helpers to do KVM_HC_MAP_GPA_RANGE hypercalls (x86) Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 29/35] KVM: selftests: Introduce VM "shape" to allow tests to specify the VM type Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 30/35] KVM: selftests: Add GUEST_SYNC[1-6] macros for synchronizing more data Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 31/35] KVM: selftests: Add x86-only selftest for private memory conversions Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 32/35] KVM: selftests: Add KVM_SET_USER_MEMORY_REGION2 helper Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 33/35] KVM: selftests: Expand set_memory_region_test to validate guest_memfd() Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 34/35] KVM: selftests: Add basic selftest for guest_memfd() Sean Christopherson
2023-10-27 18:22 ` [PATCH v13 35/35] KVM: selftests: Test KVM exit behavior for private memory/access Sean Christopherson
2023-10-30 17:39 ` [PATCH v13 00/35] KVM: guest_memfd() and per-page attributes Paolo Bonzini
