* [Patch v3 0/9] NUMA aware page table's pages allocation
@ 2022-12-22  2:34 Vipin Sharma
  2022-12-22  2:34 ` [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches Vipin Sharma
                   ` (8 more replies)
  0 siblings, 9 replies; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

Hi,

This series has expanded from v2 based on the feedback. The main changes in
this series are:

1. KVM MMU shrinker now shrinks KVM caches.
   The MMU shrinker frees the shadow page caches and the split caches whenever
   it is invoked.

2. Page table pages are allocated on the right NUMA node during fault and split.
   Page table pages are allocated on the NUMA node of the underlying physical
   page that a page table entry points to. This gave a performance improvement
   of up to 150% in a 416-vCPU VM during live migration.

3. Cache size is reduced from 40 to 5.
   40 is the current size of the KVM memory caches; it is reduced to 5. I didn't
   see any negative performance impact while running perf and
   dirty_log_perf_test, and I also saw fewer calls to get a free page.

Thanks
Vipin

v3:
- Split patches into smaller ones.
- Repurposed KVM MMU shrinker to free cache pages instead of oldest page table
  pages
- Reduced cache size from 40 to 5
- Removed __weak function and initializing node value in all architectures.
- Some name changes.

v2: https://lore.kernel.org/lkml/20221201195718.1409782-1-vipinsh@google.com/
- All page table pages will be allocated on underlying physical page's
  NUMA node.
- Introduced module parameter, numa_aware_pagetable, to disable this
  feature.
- Using kvm_pfn_to_refcounted_page to get page from a pfn.

v1: https://lore.kernel.org/all/20220801151928.270380-1-vipinsh@google.com/

Vipin Sharma (9):
  KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  KVM: x86/mmu: Remove zapped_obsolete_pages from struct kvm_arch{}
  KVM: x86/mmu: Shrink split_shadow_page_cache via KVM MMU shrinker
  KVM: Add module param to make page tables NUMA aware
  KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on
    split
  KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  KVM: x86/mmu: Allocate page table's pages on NUMA node of the
    underlying pages
  KVM: x86/mmu: Make split_shadow_page_cache NUMA aware
  KVM: x86/mmu: Reduce default cache size in KVM from 40 to
    PT64_ROOT_MAX_LEVEL

 arch/arm64/kvm/arm.c             |   2 +-
 arch/arm64/kvm/mmu.c             |   4 +-
 arch/mips/kvm/mips.c             |   2 +
 arch/riscv/kvm/mmu.c             |   2 +-
 arch/riscv/kvm/vcpu.c            |   2 +-
 arch/x86/include/asm/kvm_host.h  |  15 +-
 arch/x86/include/asm/kvm_types.h |   2 +-
 arch/x86/kvm/mmu/mmu.c           | 282 +++++++++++++++++++------------
 arch/x86/kvm/mmu/mmu_internal.h  |   2 +
 arch/x86/kvm/mmu/paging_tmpl.h   |   4 +-
 arch/x86/kvm/mmu/tdp_mmu.c       |  24 ++-
 include/linux/kvm_host.h         |  27 +++
 include/linux/kvm_types.h        |   2 +
 virt/kvm/kvm_main.c              |  35 +++-
 14 files changed, 277 insertions(+), 128 deletions(-)

-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply	[flat|nested] 47+ messages in thread

* [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
@ 2022-12-22  2:34 ` Vipin Sharma
  2022-12-27 18:37   ` Ben Gardon
                     ` (3 more replies)
  2022-12-22  2:34 ` [Patch v3 2/9] KVM: x86/mmu: Remove zapped_obsolete_pages from struct kvm_arch{} Vipin Sharma
                   ` (7 subsequent siblings)
  8 siblings, 4 replies; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

mmu_shrink_scan() is very disruptive to VMs. It picks the first VM in
the vm_list and zaps the oldest pages, which are most likely upper level
SPTEs and the most likely to be reused. Prior to the TDP MMU, this was
even more disruptive for nested VMs, since L1 SPTEs will be the oldest
even though most of the entries are for L2 SPTEs.

As discussed in
https://lore.kernel.org/lkml/Y45dldZnI6OIf+a5@google.com/
the shrinker logic has not been very useful in actually keeping VMs
performant or in reducing memory usage.

Change mmu_shrink_scan() to free pages from the vCPU's shadow page
cache.  Freeing pages from cache doesn't cause vCPU exits, therefore, a
VM's performance should not be affected.

This also makes it possible to change cache capacities without worrying
too much about high memory usage from the caches.

Tested this change by running dirty_log_perf_test while continuously
dropping caches via "echo 2 > /proc/sys/vm/drop_caches" at 1-second
intervals. WARN_ON(!mc->nobjs) messages were printed in the kernel log
from kvm_mmu_memory_cache_alloc(), which is expected.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/x86/include/asm/kvm_host.h |   5 +
 arch/x86/kvm/mmu/mmu.c          | 163 +++++++++++++++++++-------------
 arch/x86/kvm/mmu/mmu_internal.h |   2 +
 arch/x86/kvm/mmu/tdp_mmu.c      |   3 +-
 include/linux/kvm_host.h        |   1 +
 virt/kvm/kvm_main.c             |  11 ++-
 6 files changed, 114 insertions(+), 71 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index aa4eb8cfcd7e..89cc809e4a00 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -786,6 +786,11 @@ struct kvm_vcpu_arch {
 	struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
 	struct kvm_mmu_memory_cache mmu_page_header_cache;
 
+	/*
+	 * Protects changes in the size of mmu_shadow_page_cache.
+	 */
+	spinlock_t mmu_shadow_page_cache_lock;
+
 	/*
 	 * QEMU userspace and the guest each have their own FPU state.
 	 * In vcpu_run, we switch between the user and guest FPU contexts.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 254bc46234e0..157417e1cb6e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -164,7 +164,10 @@ struct kvm_shadow_walk_iterator {
 
 static struct kmem_cache *pte_list_desc_cache;
 struct kmem_cache *mmu_page_header_cache;
-static struct percpu_counter kvm_total_used_mmu_pages;
+/*
+ * Total number of unused pages in MMU shadow page cache.
+ */
+static struct percpu_counter kvm_total_unused_mmu_pages;
 
 static void mmu_spte_set(u64 *sptep, u64 spte);
 
@@ -655,6 +658,22 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 	}
 }
 
+static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
+				     spinlock_t *cache_lock)
+{
+	int orig_nobjs;
+	int r;
+
+	spin_lock(cache_lock);
+	orig_nobjs = cache->nobjs;
+	r = kvm_mmu_topup_memory_cache(cache, PT64_ROOT_MAX_LEVEL);
+	if (orig_nobjs != cache->nobjs)
+		percpu_counter_add(&kvm_total_unused_mmu_pages,
+				   (cache->nobjs - orig_nobjs));
+	spin_unlock(cache_lock);
+	return r;
+}
+
 static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 {
 	int r;
@@ -664,8 +683,8 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
-				       PT64_ROOT_MAX_LEVEL);
+	r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
+				      &vcpu->arch.mmu_shadow_page_cache_lock);
 	if (r)
 		return r;
 	if (maybe_indirect) {
@@ -678,10 +697,25 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 					  PT64_ROOT_MAX_LEVEL);
 }
 
+static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
+				     spinlock_t *cache_lock)
+{
+	int orig_nobjs;
+
+	spin_lock(cache_lock);
+	orig_nobjs = cache->nobjs;
+	kvm_mmu_free_memory_cache(cache);
+	if (orig_nobjs)
+		percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
+
+	spin_unlock(cache_lock);
+}
+
 static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
+	mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
+				 &vcpu->arch.mmu_shadow_page_cache_lock);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
@@ -1693,27 +1727,15 @@ static int is_empty_shadow_page(u64 *spt)
 }
 #endif
 
-/*
- * This value is the sum of all of the kvm instances's
- * kvm->arch.n_used_mmu_pages values.  We need a global,
- * aggregate version in order to make the slab shrinker
- * faster
- */
-static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
-{
-	kvm->arch.n_used_mmu_pages += nr;
-	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
-}
-
 static void kvm_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	kvm_mod_used_mmu_pages(kvm, +1);
+	kvm->arch.n_used_mmu_pages++;
 	kvm_account_pgtable_pages((void *)sp->spt, +1);
 }
 
 static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	kvm_mod_used_mmu_pages(kvm, -1);
+	kvm->arch.n_used_mmu_pages--;
 	kvm_account_pgtable_pages((void *)sp->spt, -1);
 }
 
@@ -2150,8 +2172,31 @@ struct shadow_page_caches {
 	struct kvm_mmu_memory_cache *page_header_cache;
 	struct kvm_mmu_memory_cache *shadow_page_cache;
 	struct kvm_mmu_memory_cache *shadowed_info_cache;
+	/*
+	 * Protects changes in the size of shadow_page_cache.
+	 */
+	spinlock_t *shadow_page_cache_lock;
 };
 
+void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
+				    spinlock_t *cache_lock)
+{
+	int orig_nobjs;
+	void *page;
+
+	if (cache_lock) {
+		spin_lock(cache_lock);
+		orig_nobjs = shadow_page_cache->nobjs;
+	}
+	page = kvm_mmu_memory_cache_alloc(shadow_page_cache);
+	if (cache_lock) {
+		if (orig_nobjs)
+			percpu_counter_dec(&kvm_total_unused_mmu_pages);
+		spin_unlock(cache_lock);
+	}
+	return page;
+}
+
 static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
 						      struct shadow_page_caches *caches,
 						      gfn_t gfn,
@@ -2161,7 +2206,8 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
 	struct kvm_mmu_page *sp;
 
 	sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
-	sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
+	sp->spt = kvm_mmu_sp_memory_cache_alloc(caches->shadow_page_cache,
+						caches->shadow_page_cache_lock);
 	if (!role.direct)
 		sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
 
@@ -2218,6 +2264,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 		.page_header_cache = &vcpu->arch.mmu_page_header_cache,
 		.shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
 		.shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
+		.shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
 	};
 
 	return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
@@ -5916,6 +5963,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
 
 	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
 
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
@@ -6051,11 +6099,6 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 		kvm_tdp_mmu_zap_invalidated_roots(kvm);
 }
 
-static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
-{
-	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
-}
-
 static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
 			struct kvm_memory_slot *slot,
 			struct kvm_page_track_notifier_node *node)
@@ -6277,6 +6320,7 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
 	/* Direct SPs do not require a shadowed_info_cache. */
 	caches.page_header_cache = &kvm->arch.split_page_header_cache;
 	caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
+	caches.shadow_page_cache_lock = NULL;
 
 	/* Safe to pass NULL for vCPU since requesting a direct SP. */
 	return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
@@ -6646,66 +6690,49 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 static unsigned long
 mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 {
-	struct kvm *kvm;
-	int nr_to_scan = sc->nr_to_scan;
+	struct kvm_mmu_memory_cache *cache;
+	struct kvm *kvm, *first_kvm = NULL;
 	unsigned long freed = 0;
+	/* spinlock for memory cache */
+	spinlock_t *cache_lock;
+	struct kvm_vcpu *vcpu;
+	unsigned long i;
 
 	mutex_lock(&kvm_lock);
 
 	list_for_each_entry(kvm, &vm_list, vm_list) {
-		int idx;
-		LIST_HEAD(invalid_list);
-
-		/*
-		 * Never scan more than sc->nr_to_scan VM instances.
-		 * Will not hit this condition practically since we do not try
-		 * to shrink more than one VM and it is very unlikely to see
-		 * !n_used_mmu_pages so many times.
-		 */
-		if (!nr_to_scan--)
+		if (first_kvm == kvm)
 			break;
-		/*
-		 * n_used_mmu_pages is accessed without holding kvm->mmu_lock
-		 * here. We may skip a VM instance errorneosly, but we do not
-		 * want to shrink a VM that only started to populate its MMU
-		 * anyway.
-		 */
-		if (!kvm->arch.n_used_mmu_pages &&
-		    !kvm_has_zapped_obsolete_pages(kvm))
-			continue;
+		if (!first_kvm)
+			first_kvm = kvm;
+		list_move_tail(&kvm->vm_list, &vm_list);
 
-		idx = srcu_read_lock(&kvm->srcu);
-		write_lock(&kvm->mmu_lock);
+		kvm_for_each_vcpu(i, vcpu, kvm) {
+			cache = &vcpu->arch.mmu_shadow_page_cache;
+			cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock;
+			if (READ_ONCE(cache->nobjs)) {
+				spin_lock(cache_lock);
+				freed += kvm_mmu_empty_memory_cache(cache);
+				spin_unlock(cache_lock);
+			}
 
-		if (kvm_has_zapped_obsolete_pages(kvm)) {
-			kvm_mmu_commit_zap_page(kvm,
-			      &kvm->arch.zapped_obsolete_pages);
-			goto unlock;
 		}
 
-		freed = kvm_mmu_zap_oldest_mmu_pages(kvm, sc->nr_to_scan);
-
-unlock:
-		write_unlock(&kvm->mmu_lock);
-		srcu_read_unlock(&kvm->srcu, idx);
-
-		/*
-		 * unfair on small ones
-		 * per-vm shrinkers cry out
-		 * sadness comes quickly
-		 */
-		list_move_tail(&kvm->vm_list, &vm_list);
-		break;
+		if (freed >= sc->nr_to_scan)
+			break;
 	}
 
+	if (freed)
+		percpu_counter_sub(&kvm_total_unused_mmu_pages, freed);
 	mutex_unlock(&kvm_lock);
+	percpu_counter_sync(&kvm_total_unused_mmu_pages);
 	return freed;
 }
 
 static unsigned long
 mmu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 {
-	return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
+	return percpu_counter_sum_positive(&kvm_total_unused_mmu_pages);
 }
 
 static struct shrinker mmu_shrinker = {
@@ -6820,7 +6847,7 @@ int kvm_mmu_vendor_module_init(void)
 	if (!mmu_page_header_cache)
 		goto out;
 
-	if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
+	if (percpu_counter_init(&kvm_total_unused_mmu_pages, 0, GFP_KERNEL))
 		goto out;
 
 	ret = register_shrinker(&mmu_shrinker, "x86-mmu");
@@ -6830,7 +6857,7 @@ int kvm_mmu_vendor_module_init(void)
 	return 0;
 
 out_shrinker:
-	percpu_counter_destroy(&kvm_total_used_mmu_pages);
+	percpu_counter_destroy(&kvm_total_unused_mmu_pages);
 out:
 	mmu_destroy_caches();
 	return ret;
@@ -6847,7 +6874,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
 void kvm_mmu_vendor_module_exit(void)
 {
 	mmu_destroy_caches();
-	percpu_counter_destroy(&kvm_total_used_mmu_pages);
+	percpu_counter_destroy(&kvm_total_unused_mmu_pages);
 	unregister_shrinker(&mmu_shrinker);
 }
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ac00bfbf32f6..c2a342028b6a 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -325,4 +325,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
+				    spinlock_t *cache_lock);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 764f7c87286f..4974fa96deff 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -264,7 +264,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 	struct kvm_mmu_page *sp;
 
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+	sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
+						&vcpu->arch.mmu_shadow_page_cache_lock);
 
 	return sp;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 01aad8b74162..efd9b38ea9a2 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1362,6 +1362,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
 int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);
 int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
+int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc);
 void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
 void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 13e88297f999..f2d762878b97 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -438,8 +438,10 @@ int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
 	return mc->nobjs;
 }
 
-void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
+int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc)
 {
+	int freed = mc->nobjs;
+
 	while (mc->nobjs) {
 		if (mc->kmem_cache)
 			kmem_cache_free(mc->kmem_cache, mc->objects[--mc->nobjs]);
@@ -447,8 +449,13 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
 			free_page((unsigned long)mc->objects[--mc->nobjs]);
 	}
 
-	kvfree(mc->objects);
+	return freed;
+}
 
+void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
+{
+	kvm_mmu_empty_memory_cache(mc);
+	kvfree(mc->objects);
 	mc->objects = NULL;
 	mc->capacity = 0;
 }
-- 
2.39.0.314.g84b9a713c41-goog



* [Patch v3 2/9] KVM: x86/mmu: Remove zapped_obsolete_pages from struct kvm_arch{}
  2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
  2022-12-22  2:34 ` [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches Vipin Sharma
@ 2022-12-22  2:34 ` Vipin Sharma
  2022-12-29 21:59   ` David Matlack
  2022-12-22  2:34 ` [Patch v3 3/9] KVM: x86/mmu: Shrink split_shadow_page_cache via KVM MMU shrinker Vipin Sharma
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

The zapped_obsolete_pages list in struct kvm_arch{} was used to provide
pages to the KVM MMU shrinker. It is no longer needed now that the
shrinker has been repurposed to free shadow page caches instead of
zapped_obsolete_pages.

Remove zapped_obsolete_pages from struct kvm_arch{} and use a local list
in kvm_zap_obsolete_pages().

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/x86/include/asm/kvm_host.h | 1 -
 arch/x86/kvm/mmu/mmu.c          | 8 ++++----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 89cc809e4a00..f89f02e18080 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1215,7 +1215,6 @@ struct kvm_arch {
 	u8 mmu_valid_gen;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
 	struct list_head active_mmu_pages;
-	struct list_head zapped_obsolete_pages;
 	/*
 	 * A list of kvm_mmu_page structs that, if zapped, could possibly be
 	 * replaced by an NX huge page.  A shadow page is on this list if its
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 157417e1cb6e..3364760a1695 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5987,6 +5987,7 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
 {
 	struct kvm_mmu_page *sp, *node;
 	int nr_zapped, batch = 0;
+	LIST_HEAD(zapped_pages);
 	bool unstable;
 
 restart:
@@ -6019,8 +6020,8 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
 			goto restart;
 		}
 
-		unstable = __kvm_mmu_prepare_zap_page(kvm, sp,
-				&kvm->arch.zapped_obsolete_pages, &nr_zapped);
+		unstable = __kvm_mmu_prepare_zap_page(kvm, sp, &zapped_pages,
+						      &nr_zapped);
 		batch += nr_zapped;
 
 		if (unstable)
@@ -6036,7 +6037,7 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
 	 * kvm_mmu_load()), and the reload in the caller ensure no vCPUs are
 	 * running with an obsolete MMU.
 	 */
-	kvm_mmu_commit_zap_page(kvm, &kvm->arch.zapped_obsolete_pages);
+	kvm_mmu_commit_zap_page(kvm, &zapped_pages);
 }
 
 /*
@@ -6112,7 +6113,6 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	int r;
 
 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
-	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
 	INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
 	spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
 
-- 
2.39.0.314.g84b9a713c41-goog



* [Patch v3 3/9] KVM: x86/mmu: Shrink split_shadow_page_cache via KVM MMU shrinker
  2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
  2022-12-22  2:34 ` [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches Vipin Sharma
  2022-12-22  2:34 ` [Patch v3 2/9] KVM: x86/mmu: Remove zapped_obsolete_pages from struct kvm_arch{} Vipin Sharma
@ 2022-12-22  2:34 ` Vipin Sharma
  2022-12-22  2:34 ` [Patch v3 4/9] KVM: Add module param to make page tables NUMA aware Vipin Sharma
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

split_shadow_page_cache is not used after dirty logging is disabled,
which makes it a good candidate for freeing memory when mmu_shrink_scan()
kicks in.

Account for split_shadow_page_cache via kvm_total_unused_mmu_pages and
use it in mmu_shrink_scan().

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/x86/include/asm/kvm_host.h |  5 +++
 arch/x86/kvm/mmu/mmu.c          | 63 +++++++++++++++++++--------------
 2 files changed, 42 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f89f02e18080..293994fabae3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1413,6 +1413,11 @@ struct kvm_arch {
 	struct kvm_mmu_memory_cache split_shadow_page_cache;
 	struct kvm_mmu_memory_cache split_page_header_cache;
 
+	/*
+	 * Protects changes in the size of split_shadow_page_cache.
+	 */
+	spinlock_t split_shadow_page_cache_lock;
+
 	/*
 	 * Memory cache used to allocate pte_list_desc structs while splitting
 	 * huge pages. In the worst case, to split one huge page, 512
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3364760a1695..6f6a10d7a871 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -659,14 +659,15 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 }
 
 static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
-				     spinlock_t *cache_lock)
+				     spinlock_t *cache_lock,
+				     int min)
 {
 	int orig_nobjs;
 	int r;
 
 	spin_lock(cache_lock);
 	orig_nobjs = cache->nobjs;
-	r = kvm_mmu_topup_memory_cache(cache, PT64_ROOT_MAX_LEVEL);
+	r = kvm_mmu_topup_memory_cache(cache, min);
 	if (orig_nobjs != cache->nobjs)
 		percpu_counter_add(&kvm_total_unused_mmu_pages,
 				   (cache->nobjs - orig_nobjs));
@@ -684,7 +685,8 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 	if (r)
 		return r;
 	r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
-				      &vcpu->arch.mmu_shadow_page_cache_lock);
+				      &vcpu->arch.mmu_shadow_page_cache_lock,
+				      PT64_ROOT_MAX_LEVEL);
 	if (r)
 		return r;
 	if (maybe_indirect) {
@@ -2184,16 +2186,12 @@ void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cac
 	int orig_nobjs;
 	void *page;
 
-	if (cache_lock) {
-		spin_lock(cache_lock);
-		orig_nobjs = shadow_page_cache->nobjs;
-	}
+	spin_lock(cache_lock);
+	orig_nobjs = shadow_page_cache->nobjs;
 	page = kvm_mmu_memory_cache_alloc(shadow_page_cache);
-	if (cache_lock) {
-		if (orig_nobjs)
-			percpu_counter_dec(&kvm_total_unused_mmu_pages);
-		spin_unlock(cache_lock);
-	}
+	if (orig_nobjs)
+		percpu_counter_dec(&kvm_total_unused_mmu_pages);
+	spin_unlock(cache_lock);
 	return page;
 }
 
@@ -6130,6 +6128,7 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
 
 	kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
 
 	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
 	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
@@ -6141,7 +6140,8 @@ static void mmu_free_vm_memory_caches(struct kvm *kvm)
 {
 	kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
 	kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
-	kvm_mmu_free_memory_cache(&kvm->arch.split_shadow_page_cache);
+	mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
+				 &kvm->arch.split_shadow_page_cache_lock);
 }
 
 void kvm_mmu_uninit_vm(struct kvm *kvm)
@@ -6295,7 +6295,9 @@ static int topup_split_caches(struct kvm *kvm)
 	if (r)
 		return r;
 
-	return kvm_mmu_topup_memory_cache(&kvm->arch.split_shadow_page_cache, 1);
+	return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
+					 &kvm->arch.split_shadow_page_cache_lock,
+					 1);
 }
 
 static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *huge_sptep)
@@ -6320,7 +6322,7 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
 	/* Direct SPs do not require a shadowed_info_cache. */
 	caches.page_header_cache = &kvm->arch.split_page_header_cache;
 	caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
-	caches.shadow_page_cache_lock = NULL;
+	caches.shadow_page_cache_lock = &kvm->arch.split_shadow_page_cache_lock;
 
 	/* Safe to pass NULL for vCPU since requesting a direct SP. */
 	return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
@@ -6687,14 +6689,23 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 	}
 }
 
+static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache,
+				      spinlock_t *cache_lock)
+{
+	unsigned long freed = 0;
+
+	spin_lock(cache_lock);
+	if (cache->nobjs)
+		freed = kvm_mmu_empty_memory_cache(cache);
+	spin_unlock(cache_lock);
+	return freed;
+}
+
 static unsigned long
 mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 {
-	struct kvm_mmu_memory_cache *cache;
 	struct kvm *kvm, *first_kvm = NULL;
 	unsigned long freed = 0;
-	/* spinlock for memory cache */
-	spinlock_t *cache_lock;
 	struct kvm_vcpu *vcpu;
 	unsigned long i;
 
@@ -6707,15 +6718,15 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 			first_kvm = kvm;
 		list_move_tail(&kvm->vm_list, &vm_list);
 
-		kvm_for_each_vcpu(i, vcpu, kvm) {
-			cache = &vcpu->arch.mmu_shadow_page_cache;
-			cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock;
-			if (READ_ONCE(cache->nobjs)) {
-				spin_lock(cache_lock);
-				freed += kvm_mmu_empty_memory_cache(cache);
-				spin_unlock(cache_lock);
-			}
+		freed += mmu_shrink_cache(&kvm->arch.split_shadow_page_cache,
+					  &kvm->arch.split_shadow_page_cache_lock);
 
+		if (freed >= sc->nr_to_scan)
+			break;
+
+		kvm_for_each_vcpu(i, vcpu, kvm) {
+			freed += mmu_shrink_cache(&vcpu->arch.mmu_shadow_page_cache,
+						  &vcpu->arch.mmu_shadow_page_cache_lock);
 		}
 
 		if (freed >= sc->nr_to_scan)
-- 
2.39.0.314.g84b9a713c41-goog



* [Patch v3 4/9] KVM: Add module param to make page tables NUMA aware
  2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
                   ` (2 preceding siblings ...)
  2022-12-22  2:34 ` [Patch v3 3/9] KVM: x86/mmu: Shrink split_shadow_page_cache via KVM MMU shrinker Vipin Sharma
@ 2022-12-22  2:34 ` Vipin Sharma
  2022-12-29 22:05   ` David Matlack
  2022-12-22  2:34 ` [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split Vipin Sharma
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

Add a numa_aware_pagetable module param to make page tables NUMA aware.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 include/linux/kvm_host.h |  2 ++
 virt/kvm/kvm_main.c      | 22 ++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index efd9b38ea9a2..d48064503b88 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1358,6 +1358,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 
+void *kvm_mmu_get_free_page(int nid, gfp_t gfp);
+
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
 int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f2d762878b97..d96c8146e9ba 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -93,6 +93,13 @@ unsigned int halt_poll_ns_shrink;
 module_param(halt_poll_ns_shrink, uint, 0644);
 EXPORT_SYMBOL_GPL(halt_poll_ns_shrink);
 
+/*
+ * If possible, allocate page table pages on the same NUMA node as the
+ * underlying physical pages they map.
+ */
+static bool __read_mostly numa_aware_pagetable = true;
+module_param_named(numa_aware_pagetable, numa_aware_pagetable, bool, 0644);
+
 /*
  * Ordering of locks:
  *
@@ -384,6 +391,21 @@ static void kvm_flush_shadow_all(struct kvm *kvm)
 	kvm_arch_guest_memory_reclaimed(kvm);
 }
 
+void *kvm_mmu_get_free_page(int nid, gfp_t gfp)
+{
+	#ifdef CONFIG_NUMA
+	struct page *spt_page;
+
+	if (numa_aware_pagetable) {
+		spt_page = alloc_pages_node(nid, gfp, 0);
+		if (spt_page)
+			return page_address(spt_page);
+	}
+	#endif // CONFIG_NUMA
+
+	return (void *)__get_free_page(gfp);
+}
+
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
 					       gfp_t gfp_flags)
-- 
2.39.0.314.g84b9a713c41-goog



* [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split
  2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
                   ` (3 preceding siblings ...)
  2022-12-22  2:34 ` [Patch v3 4/9] KVM: Add module param to make page tables NUMA aware Vipin Sharma
@ 2022-12-22  2:34 ` Vipin Sharma
  2022-12-27 19:02   ` Ben Gardon
  2022-12-29 22:30   ` David Matlack
  2022-12-22  2:34 ` [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{} Vipin Sharma
                   ` (3 subsequent siblings)
  8 siblings, 2 replies; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

When dirty logging is enabled, huge pages are split. Page table pages
allocated during the split are taken from the current thread's NUMA
node or mempolicy. This causes inefficient page table accesses if the
underlying page is on a different NUMA node.

Allocate page table pages on the same NUMA node as the underlying huge
page when dirty logging is enabled and huge pages are split.

During the pre-copy phase of live migration of a 416-vCPU, 11 TiB
memory VM on an 8-node host, performance gains in the range of 130% to
150% were observed.

Suggested-by: David Matlack <dmatlack@google.com>
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 12 ++++++++----
 include/linux/kvm_host.h   | 18 ++++++++++++++++++
 2 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 4974fa96deff..376b8dceb3f9 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1403,7 +1403,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
 	return spte_set;
 }
 
-static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
+static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(int nid, gfp_t gfp)
 {
 	struct kvm_mmu_page *sp;
 
@@ -1413,7 +1413,8 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
 	if (!sp)
 		return NULL;
 
-	sp->spt = (void *)__get_free_page(gfp);
+	sp->spt = kvm_mmu_get_free_page(nid, gfp);
+
 	if (!sp->spt) {
 		kmem_cache_free(mmu_page_header_cache, sp);
 		return NULL;
@@ -1427,6 +1428,9 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 						       bool shared)
 {
 	struct kvm_mmu_page *sp;
+	int nid;
+
+	nid = kvm_pfn_to_page_table_nid(spte_to_pfn(iter->old_spte));
 
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
@@ -1437,7 +1441,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 	 * If this allocation fails we drop the lock and retry with reclaim
 	 * allowed.
 	 */
-	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
+	sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_NOWAIT | __GFP_ACCOUNT);
 	if (sp)
 		return sp;
 
@@ -1449,7 +1453,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 
 	iter->yielded = true;
-	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
+	sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_KERNEL_ACCOUNT);
 
 	if (shared)
 		read_lock(&kvm->mmu_lock);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d48064503b88..a262e15ebd19 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1583,6 +1583,24 @@ void kvm_arch_sync_events(struct kvm *kvm);
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
 
 struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn);
+
+/*
+ * Return the NUMA node on which a page table's page should be allocated,
+ * based on the pfn that its page table entry will point to.
+ *
+ * Returns the nid of the page if the pfn is valid and backed by a refcounted
+ * page; otherwise returns the nearest memory node of the current CPU.
+ */
+static inline int kvm_pfn_to_page_table_nid(kvm_pfn_t pfn)
+{
+	struct page *page = kvm_pfn_to_refcounted_page(pfn);
+
+	if (page)
+		return page_to_nid(page);
+	else
+		return numa_mem_id();
+}
+
 bool kvm_is_zone_device_page(struct page *page);
 
 struct kvm_irq_ack_notifier {
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
                   ` (4 preceding siblings ...)
  2022-12-22  2:34 ` [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split Vipin Sharma
@ 2022-12-22  2:34 ` Vipin Sharma
  2022-12-27 19:09   ` Ben Gardon
  2022-12-29 23:08   ` David Matlack
  2022-12-22  2:34 ` [Patch v3 7/9] KVM: x86/mmu: Allocate page table's pages on NUMA node of the underlying pages Vipin Sharma
                   ` (2 subsequent siblings)
  8 siblings, 2 replies; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

Add a 'node' field to kvm_mmu_memory_cache{} to denote which NUMA node
this cache should allocate memory from. Initialize it to NUMA_NO_NODE by
default in all architectures.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/arm64/kvm/arm.c      |  2 +-
 arch/arm64/kvm/mmu.c      |  4 +++-
 arch/mips/kvm/mips.c      |  2 ++
 arch/riscv/kvm/mmu.c      |  2 +-
 arch/riscv/kvm/vcpu.c     |  2 +-
 arch/x86/kvm/mmu/mmu.c    | 22 ++++++++++++----------
 include/linux/kvm_host.h  |  6 ++++++
 include/linux/kvm_types.h |  2 ++
 8 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9c5573bc4614..52a41f4532e2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.target = -1;
 	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
 
-	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
 
 	/*
 	 * Default value for the FP state, will be overloaded at load
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 31d7fa4c7c14..bd07155e17fa 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 {
 	phys_addr_t addr;
 	int ret = 0;
-	struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
+	struct kvm_mmu_memory_cache cache = {};
 	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
 				     KVM_PGTABLE_PROT_R |
 				     (writable ? KVM_PGTABLE_PROT_W : 0);
 
+	INIT_KVM_MMU_MEMORY_CACHE(&cache, NULL, NUMA_NO_NODE);
+
 	if (is_protected_kvm_enabled())
 		return -EPERM;
 
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index a25e0b73ee70..b017c29a9340 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -304,6 +304,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 		     HRTIMER_MODE_REL);
 	vcpu->arch.comparecount_timer.function = kvm_mips_comparecount_wakeup;
 
+	vcpu->arch.mmu_page_cache.node = NUMA_NO_NODE;
+
 	/*
 	 * Allocate space for host mode exception handlers that handle
 	 * guest mode exits
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 34b57e0be2ef..119de4520cc6 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -353,9 +353,9 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 	phys_addr_t addr, end;
 	struct kvm_mmu_memory_cache pcache = {
 		.gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
-		.gfp_zero = __GFP_ZERO,
 	};
 
+	INIT_KVM_MMU_MEMORY_CACHE(&pcache, NULL, NUMA_NO_NODE);
 	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
 	pfn = __phys_to_pfn(hpa);
 
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 7c08567097f0..189b14feb365 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -161,7 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
 	/* Mark this VCPU never ran */
 	vcpu->arch.ran_atleast_once = false;
-	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
 	bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
 
 	/* Setup ISA features available to VCPU */
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f6a10d7a871..23a3b82b2384 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5954,13 +5954,14 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 {
 	int ret;
 
-	vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
-	vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
+				  pte_list_desc_cache, NUMA_NO_NODE);
 
-	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
-	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
+				  mmu_page_header_cache, NUMA_NO_NODE);
 
-	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
+				  NULL, NUMA_NO_NODE);
 	spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
 
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
@@ -6124,14 +6125,15 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
 
-	kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
-	kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
+	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
+				  mmu_page_header_cache, NUMA_NO_NODE);
 
-	kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
+				  NULL, NUMA_NO_NODE);
 	spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
 
-	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
-	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
+	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
+				  pte_list_desc_cache, NUMA_NO_NODE);
 
 	return 0;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a262e15ebd19..719687a37ef7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2302,4 +2302,10 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
 /* Max number of entries allowed for each kvm dirty ring */
 #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
 
+#define INIT_KVM_MMU_MEMORY_CACHE(_cache, _kmem_cache, _node) ({	\
+	(_cache)->kmem_cache = _kmem_cache;				\
+	(_cache)->gfp_zero = __GFP_ZERO;				\
+	(_cache)->node = _node;						\
+})
+
 #endif
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 76de36e56cdf..9c70ce95e51f 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -97,6 +97,8 @@ struct kvm_mmu_memory_cache {
 	struct kmem_cache *kmem_cache;
 	int capacity;
 	void **objects;
+	/* Node on which memory should be allocated by default */
+	int node;
 };
 #endif
 
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 7/9] KVM: x86/mmu: Allocate page table's pages on NUMA node of the underlying pages
  2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
                   ` (5 preceding siblings ...)
  2022-12-22  2:34 ` [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{} Vipin Sharma
@ 2022-12-22  2:34 ` Vipin Sharma
  2022-12-27 19:34   ` Ben Gardon
  2022-12-22  2:34 ` [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware Vipin Sharma
  2022-12-22  2:34 ` [Patch v3 9/9] KVM: x86/mmu: Reduce default cache size in KVM from 40 to PT64_ROOT_MAX_LEVEL Vipin Sharma
  8 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

Page table pages of a VM are currently allocated based on the current
task's NUMA node or its mempolicy. This can cause suboptimal remote
accesses by a vCPU if it is accessing physical pages local to its NUMA
node while the page table pages mapping those physical pages were
created by some other vCPU that was on a different NUMA node or had a
different mempolicy.

Allocate page table pages on the same NUMA node where the underlying
physical page exists. Page tables at levels 5, 4, and 3 might not end
up on the same NUMA node, as they can span multiple NUMA nodes.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          | 63 ++++++++++++++++++++++-----------
 arch/x86/kvm/mmu/paging_tmpl.h  |  4 +--
 arch/x86/kvm/mmu/tdp_mmu.c      | 11 +++---
 virt/kvm/kvm_main.c             |  2 +-
 5 files changed, 53 insertions(+), 29 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 293994fabae3..b1f319ad6f89 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -782,7 +782,7 @@ struct kvm_vcpu_arch {
 	struct kvm_mmu *walk_mmu;
 
 	struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
-	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
+	struct kvm_mmu_memory_cache mmu_shadow_page_cache[MAX_NUMNODES];
 	struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
 	struct kvm_mmu_memory_cache mmu_page_header_cache;
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 23a3b82b2384..511c6ef265ee 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -677,24 +677,29 @@ static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
 
 static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 {
-	int r;
+	int r, nid;
 
 	/* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
 	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
 				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
-				      &vcpu->arch.mmu_shadow_page_cache_lock,
-				      PT64_ROOT_MAX_LEVEL);
-	if (r)
-		return r;
+
+	for_each_online_node(nid) {
+		r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache[nid],
+					      &vcpu->arch.mmu_shadow_page_cache_lock,
+					      PT64_ROOT_MAX_LEVEL);
+		if (r)
+			return r;
+	}
+
 	if (maybe_indirect) {
 		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadowed_info_cache,
 					       PT64_ROOT_MAX_LEVEL);
 		if (r)
 			return r;
 	}
+
 	return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
 					  PT64_ROOT_MAX_LEVEL);
 }
@@ -715,9 +720,14 @@ static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
 
 static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
+	int nid;
+
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
-	mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
-				 &vcpu->arch.mmu_shadow_page_cache_lock);
+
+	for_each_node(nid)
+		mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache[nid],
+					 &vcpu->arch.mmu_shadow_page_cache_lock);
+
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
@@ -2256,11 +2266,12 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
 
 static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 						    gfn_t gfn,
-						    union kvm_mmu_page_role role)
+						    union kvm_mmu_page_role role,
+						    int nid)
 {
 	struct shadow_page_caches caches = {
 		.page_header_cache = &vcpu->arch.mmu_page_header_cache,
-		.shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
+		.shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache[nid],
 		.shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
 		.shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
 	};
@@ -2316,15 +2327,19 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
 
 static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
 						 u64 *sptep, gfn_t gfn,
-						 bool direct, unsigned int access)
+						 bool direct, unsigned int access,
+						 kvm_pfn_t pfn)
 {
 	union kvm_mmu_page_role role;
+	int nid;
 
 	if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
 		return ERR_PTR(-EEXIST);
 
 	role = kvm_mmu_child_role(sptep, direct, access);
-	return kvm_mmu_get_shadow_page(vcpu, gfn, role);
+	nid = kvm_pfn_to_page_table_nid(pfn);
+
+	return kvm_mmu_get_shadow_page(vcpu, gfn, role, nid);
 }
 
 static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
@@ -3208,7 +3223,8 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		if (it.level == fault->goal_level)
 			break;
 
-		sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);
+		sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true,
+					  ACC_ALL, fault->pfn);
 		if (sp == ERR_PTR(-EEXIST))
 			continue;
 
@@ -3636,7 +3652,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
 	WARN_ON_ONCE(quadrant && !role.has_4_byte_gpte);
 	WARN_ON_ONCE(role.direct && role.has_4_byte_gpte);
 
-	sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
+	sp = kvm_mmu_get_shadow_page(vcpu, gfn, role, numa_mem_id());
 	++sp->root_count;
 
 	return __pa(sp->spt);
@@ -5952,7 +5968,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
 
 int kvm_mmu_create(struct kvm_vcpu *vcpu)
 {
-	int ret;
+	int ret, nid;
 
 	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
 				  pte_list_desc_cache, NUMA_NO_NODE);
@@ -5960,8 +5976,9 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
 				  mmu_page_header_cache, NUMA_NO_NODE);
 
-	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
-				  NULL, NUMA_NO_NODE);
+	for_each_node(nid)
+		INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache[nid],
+					  NULL, nid);
 	spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
 
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
@@ -6692,13 +6709,17 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 }
 
 static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache,
+				      int cache_count,
 				      spinlock_t *cache_lock)
 {
 	unsigned long freed = 0;
+	int nid;
 
 	spin_lock(cache_lock);
-	if (cache->nobjs)
-		freed = kvm_mmu_empty_memory_cache(cache);
+	for (nid = 0; nid < cache_count; nid++) {
+		if (node_online(nid) && cache[nid].nobjs)
+			freed += kvm_mmu_empty_memory_cache(&cache[nid]);
+	}
 	spin_unlock(cache_lock);
 	return freed;
 }
@@ -6721,13 +6742,15 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		list_move_tail(&kvm->vm_list, &vm_list);
 
 		freed += mmu_shrink_cache(&kvm->arch.split_shadow_page_cache,
+					  1,
 					  &kvm->arch.split_shadow_page_cache_lock);
 
 		if (freed >= sc->nr_to_scan)
 			break;
 
 		kvm_for_each_vcpu(i, vcpu, kvm) {
-			freed += mmu_shrink_cache(&vcpu->arch.mmu_shadow_page_cache,
+			freed += mmu_shrink_cache(vcpu->arch.mmu_shadow_page_cache,
+						  MAX_NUMNODES,
 						  &vcpu->arch.mmu_shadow_page_cache_lock);
 		}
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index e5662dbd519c..1ceca62ec4cf 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -652,7 +652,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		table_gfn = gw->table_gfn[it.level - 2];
 		access = gw->pt_access[it.level - 2];
 		sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
-					  false, access);
+					  false, access, fault->pfn);
 
 		if (sp != ERR_PTR(-EEXIST)) {
 			/*
@@ -708,7 +708,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		validate_direct_spte(vcpu, it.sptep, direct_access);
 
 		sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn,
-					  true, direct_access);
+					  true, direct_access, fault->pfn);
 		if (sp == ERR_PTR(-EEXIST))
 			continue;
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 376b8dceb3f9..b5abae2366dd 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -259,12 +259,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 		    kvm_mmu_page_as_id(_root) != _as_id) {		\
 		} else
 
-static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
+static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu, int nid)
 {
 	struct kvm_mmu_page *sp;
 
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
+	sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache[nid],
 						&vcpu->arch.mmu_shadow_page_cache_lock);
 
 	return sp;
@@ -317,7 +317,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 			goto out;
 	}
 
-	root = tdp_mmu_alloc_sp(vcpu);
+	root = tdp_mmu_alloc_sp(vcpu, numa_mem_id());
 	tdp_mmu_init_sp(root, NULL, 0, role);
 
 	refcount_set(&root->tdp_mmu_root_count, 1);
@@ -1149,7 +1149,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	struct kvm *kvm = vcpu->kvm;
 	struct tdp_iter iter;
 	struct kvm_mmu_page *sp;
-	int ret = RET_PF_RETRY;
+	int ret = RET_PF_RETRY, nid;
 
 	kvm_mmu_hugepage_adjust(vcpu, fault);
 
@@ -1178,11 +1178,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		    !is_large_pte(iter.old_spte))
 			continue;
 
+		nid = kvm_pfn_to_page_table_nid(fault->pfn);
 		/*
 		 * The SPTE is either non-present or points to a huge page that
 		 * needs to be split.
 		 */
-		sp = tdp_mmu_alloc_sp(vcpu);
+		sp = tdp_mmu_alloc_sp(vcpu, nid);
 		tdp_mmu_init_child_sp(sp, &iter);
 
 		sp->nx_huge_page_disallowed = fault->huge_page_disallowed;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d96c8146e9ba..4f3db7ffeba8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -415,7 +415,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
 	if (mc->kmem_cache)
 		return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
 	else
-		return (void *)__get_free_page(gfp_flags);
+		return kvm_mmu_get_free_page(mc->node, gfp_flags);
 }
 
 int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware
  2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
                   ` (6 preceding siblings ...)
  2022-12-22  2:34 ` [Patch v3 7/9] KVM: x86/mmu: Allocate page table's pages on NUMA node of the underlying pages Vipin Sharma
@ 2022-12-22  2:34 ` Vipin Sharma
  2022-12-27 19:42   ` Ben Gardon
  2022-12-29 23:18   ` David Matlack
  2022-12-22  2:34 ` [Patch v3 9/9] KVM: x86/mmu: Reduce default cache size in KVM from 40 to PT64_ROOT_MAX_LEVEL Vipin Sharma
  8 siblings, 2 replies; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

Make split_shadow_page_cache NUMA aware and allocate page table pages
during the split on the NUMA node of the underlying physical page.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          | 50 ++++++++++++++++++---------------
 2 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b1f319ad6f89..7b3f36ae37a4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1410,7 +1410,7 @@ struct kvm_arch {
 	 *
 	 * Protected by kvm->slots_lock.
 	 */
-	struct kvm_mmu_memory_cache split_shadow_page_cache;
+	struct kvm_mmu_memory_cache split_shadow_page_cache[MAX_NUMNODES];
 	struct kvm_mmu_memory_cache split_page_header_cache;
 
 	/*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 511c6ef265ee..7454bfc49a51 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6126,7 +6126,7 @@ static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
 int kvm_mmu_init_vm(struct kvm *kvm)
 {
 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
-	int r;
+	int r, nid;
 
 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
 	INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
@@ -6145,8 +6145,9 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
 				  mmu_page_header_cache, NUMA_NO_NODE);
 
-	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
-				  NULL, NUMA_NO_NODE);
+	for_each_node(nid)
+		INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache[nid],
+					  NULL, nid);
 	spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
 
 	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
@@ -6157,10 +6158,13 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 
 static void mmu_free_vm_memory_caches(struct kvm *kvm)
 {
+	int nid;
+
 	kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
 	kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
-	mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
-				 &kvm->arch.split_shadow_page_cache_lock);
+	for_each_node(nid)
+		mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache[nid],
+					 &kvm->arch.split_shadow_page_cache_lock);
 }
 
 void kvm_mmu_uninit_vm(struct kvm *kvm)
@@ -6269,7 +6273,7 @@ static inline bool need_topup(struct kvm_mmu_memory_cache *cache, int min)
 	return kvm_mmu_memory_cache_nr_free_objects(cache) < min;
 }
 
-static bool need_topup_split_caches_or_resched(struct kvm *kvm)
+static bool need_topup_split_caches_or_resched(struct kvm *kvm, int nid)
 {
 	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
 		return true;
@@ -6281,10 +6285,10 @@ static bool need_topup_split_caches_or_resched(struct kvm *kvm)
 	 */
 	return need_topup(&kvm->arch.split_desc_cache, SPLIT_DESC_CACHE_MIN_NR_OBJECTS) ||
 	       need_topup(&kvm->arch.split_page_header_cache, 1) ||
-	       need_topup(&kvm->arch.split_shadow_page_cache, 1);
+	       need_topup(&kvm->arch.split_shadow_page_cache[nid], 1);
 }
 
-static int topup_split_caches(struct kvm *kvm)
+static int topup_split_caches(struct kvm *kvm, int nid)
 {
 	/*
 	 * Allocating rmap list entries when splitting huge pages for nested
@@ -6314,18 +6318,21 @@ static int topup_split_caches(struct kvm *kvm)
 	if (r)
 		return r;
 
-	return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
+	return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache[nid],
 					 &kvm->arch.split_shadow_page_cache_lock,
 					 1);
 }
 
-static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *huge_sptep)
+static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm,
+							u64 *huge_sptep,
+							u64 huge_spte)
 {
 	struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
 	struct shadow_page_caches caches = {};
 	union kvm_mmu_page_role role;
 	unsigned int access;
 	gfn_t gfn;
+	int nid;
 
 	gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
 	access = kvm_mmu_page_get_access(huge_sp, spte_index(huge_sptep));
@@ -6338,9 +6345,11 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
 	 */
 	role = kvm_mmu_child_role(huge_sptep, /*direct=*/true, access);
 
+	nid = kvm_pfn_to_page_table_nid(spte_to_pfn(huge_spte));
+
 	/* Direct SPs do not require a shadowed_info_cache. */
 	caches.page_header_cache = &kvm->arch.split_page_header_cache;
-	caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
+	caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache[nid];
 	caches.shadow_page_cache_lock = &kvm->arch.split_shadow_page_cache_lock;
 
 	/* Safe to pass NULL for vCPU since requesting a direct SP. */
@@ -6360,7 +6369,7 @@ static void shadow_mmu_split_huge_page(struct kvm *kvm,
 	gfn_t gfn;
 	int index;
 
-	sp = shadow_mmu_get_sp_for_split(kvm, huge_sptep);
+	sp = shadow_mmu_get_sp_for_split(kvm, huge_sptep, huge_spte);
 
 	for (index = 0; index < SPTE_ENT_PER_PAGE; index++) {
 		sptep = &sp->spt[index];
@@ -6398,7 +6407,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
 					  u64 *huge_sptep)
 {
 	struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
-	int level, r = 0;
+	int level, r = 0, nid;
 	gfn_t gfn;
 	u64 spte;
 
@@ -6406,13 +6415,14 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
 	gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
 	level = huge_sp->role.level;
 	spte = *huge_sptep;
+	nid = kvm_pfn_to_page_table_nid(spte_to_pfn(spte));
 
 	if (kvm_mmu_available_pages(kvm) <= KVM_MIN_FREE_MMU_PAGES) {
 		r = -ENOSPC;
 		goto out;
 	}
 
-	if (need_topup_split_caches_or_resched(kvm)) {
+	if (need_topup_split_caches_or_resched(kvm, nid)) {
 		write_unlock(&kvm->mmu_lock);
 		cond_resched();
 		/*
@@ -6420,7 +6430,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
 		 * rmap iterator should be restarted because the MMU lock was
 		 * dropped.
 		 */
-		r = topup_split_caches(kvm) ?: -EAGAIN;
+		r = topup_split_caches(kvm, nid) ?: -EAGAIN;
 		write_lock(&kvm->mmu_lock);
 		goto out;
 	}
@@ -6709,17 +6719,15 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 }
 
 static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache,
-				      int cache_count,
 				      spinlock_t *cache_lock)
 {
 	unsigned long freed = 0;
 	int nid;
 
 	spin_lock(cache_lock);
-	for (nid = 0; nid < cache_count; nid++) {
-		if (node_online(nid) && cache[nid].nobjs)
+	for_each_online_node(nid)
+		if (cache[nid].nobjs)
 			freed += kvm_mmu_empty_memory_cache(&cache[nid]);
-	}
 	spin_unlock(cache_lock);
 	return freed;
 }
@@ -6741,8 +6749,7 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 			first_kvm = kvm;
 		list_move_tail(&kvm->vm_list, &vm_list);
 
-		freed += mmu_shrink_cache(&kvm->arch.split_shadow_page_cache,
-					  1,
+		freed += mmu_shrink_cache(kvm->arch.split_shadow_page_cache,
 					  &kvm->arch.split_shadow_page_cache_lock);
 
 		if (freed >= sc->nr_to_scan)
@@ -6750,7 +6757,6 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 
 		kvm_for_each_vcpu(i, vcpu, kvm) {
 			freed += mmu_shrink_cache(vcpu->arch.mmu_shadow_page_cache,
-						  MAX_NUMNODES,
 						  &vcpu->arch.mmu_shadow_page_cache_lock);
 		}
 
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 9/9] KVM: x86/mmu: Reduce default cache size in KVM from 40 to PT64_ROOT_MAX_LEVEL
  2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
                   ` (7 preceding siblings ...)
  2022-12-22  2:34 ` [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware Vipin Sharma
@ 2022-12-22  2:34 ` Vipin Sharma
  2022-12-27 19:52   ` Ben Gardon
  8 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2022-12-22  2:34 UTC (permalink / raw)
  To: seanjc, pbonzini, bgardon, dmatlack; +Cc: kvm, linux-kernel, Vipin Sharma

KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE is set to 40 without any specific
reason. Reduce the default size to PT64_ROOT_MAX_LEVEL, which is
currently 5.

Change the mmu_pte_list_desc_cache capacity to exactly what is needed,
as it is more than 5 but far less than 40.

Tested by running dirty_log_perf_test on both the TDP and shadow MMU
with 48 vCPUs and 2 GiB/vCPU on a 2-NUMA-node machine. No performance
impact was noticed.

Ran perf on dirty_log_perf_test and found that kvm_mmu_get_free_page()
calls were reduced by ~3300, which is close to 48 (vCPUs) * 2 (nodes) *
35 (reduction in cache size).

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/x86/include/asm/kvm_types.h | 2 +-
 arch/x86/kvm/mmu/mmu.c           | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_types.h b/arch/x86/include/asm/kvm_types.h
index 08f1b57d3b62..752dab218a62 100644
--- a/arch/x86/include/asm/kvm_types.h
+++ b/arch/x86/include/asm/kvm_types.h
@@ -2,6 +2,6 @@
 #ifndef _ASM_X86_KVM_TYPES_H
 #define _ASM_X86_KVM_TYPES_H
 
-#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
+#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE PT64_ROOT_MAX_LEVEL
 
 #endif /* _ASM_X86_KVM_TYPES_H */
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7454bfc49a51..f89d933ff380 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -677,11 +677,12 @@ static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
 
 static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 {
-	int r, nid;
+	int r, nid, desc_capacity;
 
 	/* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
-	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
-				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
+	desc_capacity = 1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM;
+	r = __kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
+					 desc_capacity, desc_capacity);
 	if (r)
 		return r;
 
-- 
2.39.0.314.g84b9a713c41-goog


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2022-12-22  2:34 ` [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches Vipin Sharma
@ 2022-12-27 18:37   ` Ben Gardon
  2022-12-28 22:07     ` Vipin Sharma
  2022-12-29 21:54   ` David Matlack
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 47+ messages in thread
From: Ben Gardon @ 2022-12-27 18:37 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> mmu_shrink_scan() is very disruptive to VMs. It picks the first
> VM in the vm_list and zaps the oldest pages, which are most likely
> upper level SPTEs and most likely to be reused. Prior to the TDP MMU,
> this was even more disruptive in the nested VMs case, considering L1
> SPTEs would be the oldest even though most of the entries are for L2
> SPTEs.
>
> As discussed in
> https://lore.kernel.org/lkml/Y45dldZnI6OIf+a5@google.com/
> the shrinker logic has not been very useful in actually keeping VMs
> performant and reducing memory usage.
>
> Change mmu_shrink_scan() to free pages from the vCPU's shadow page
> cache. Freeing pages from the cache doesn't cause vCPU exits;
> therefore, a VM's performance should not be affected.
>
> This also allows changing cache capacities without worrying too much
> about high memory usage in the cache.
>
> Tested this change by running dirty_log_perf_test while dropping cache
> via "echo 2 > /proc/sys/vm/drop_caches" at 1 second interval
> continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> logs from kvm_mmu_memory_cache_alloc(), which is expected.

Oh, that's not a good thing. I don't think we want to be hitting those
warnings. For one, kernel warnings should not be expected behavior,
probably for many reasons, but at least because Syzbot will find it.
In this particular case, we don't want to hit the warning because we
will then fall back to a GFP_ATOMIC allocation, which can fail, and if
it fails, we'll BUG:

void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
{
        void *p;

        if (WARN_ON(!mc->nobjs))
                p = mmu_memory_cache_alloc_obj(mc, GFP_ATOMIC | __GFP_ACCOUNT);
        else
                p = mc->objects[--mc->nobjs];
        BUG_ON(!p);
        return p;
}

Perhaps the risk of actually panicking is small, but it probably
indicates that we need better error handling around failed allocations
from the cache.
Or, the slightly less elegant approach might be to just hold the cache
lock around the cache topup and use of pages from the cache, but
adding better error handling would probably be cleaner.

>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/include/asm/kvm_host.h |   5 +
>  arch/x86/kvm/mmu/mmu.c          | 163 +++++++++++++++++++-------------
>  arch/x86/kvm/mmu/mmu_internal.h |   2 +
>  arch/x86/kvm/mmu/tdp_mmu.c      |   3 +-
>  include/linux/kvm_host.h        |   1 +
>  virt/kvm/kvm_main.c             |  11 ++-
>  6 files changed, 114 insertions(+), 71 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index aa4eb8cfcd7e..89cc809e4a00 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -786,6 +786,11 @@ struct kvm_vcpu_arch {
>         struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
>         struct kvm_mmu_memory_cache mmu_page_header_cache;
>
> +       /*
> +        * Protects change in size of mmu_shadow_page_cache cache.
> +        */
> +       spinlock_t mmu_shadow_page_cache_lock;
> +
>         /*
>          * QEMU userspace and the guest each have their own FPU state.
>          * In vcpu_run, we switch between the user and guest FPU contexts.
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 254bc46234e0..157417e1cb6e 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -164,7 +164,10 @@ struct kvm_shadow_walk_iterator {
>
>  static struct kmem_cache *pte_list_desc_cache;
>  struct kmem_cache *mmu_page_header_cache;
> -static struct percpu_counter kvm_total_used_mmu_pages;
> +/*
> + * Total number of unused pages in MMU shadow page cache.
> + */
> +static struct percpu_counter kvm_total_unused_mmu_pages;
>
>  static void mmu_spte_set(u64 *sptep, u64 spte);
>
> @@ -655,6 +658,22 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
>         }
>  }
>
> +static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> +                                    spinlock_t *cache_lock)
> +{
> +       int orig_nobjs;
> +       int r;
> +
> +       spin_lock(cache_lock);
> +       orig_nobjs = cache->nobjs;
> +       r = kvm_mmu_topup_memory_cache(cache, PT64_ROOT_MAX_LEVEL);
> +       if (orig_nobjs != cache->nobjs)
> +               percpu_counter_add(&kvm_total_unused_mmu_pages,
> +                                  (cache->nobjs - orig_nobjs));
> +       spin_unlock(cache_lock);
> +       return r;
> +}
> +
>  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  {
>         int r;
> @@ -664,8 +683,8 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>                                        1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
>         if (r)
>                 return r;
> -       r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> -                                      PT64_ROOT_MAX_LEVEL);
> +       r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> +                                     &vcpu->arch.mmu_shadow_page_cache_lock);
>         if (r)
>                 return r;
>         if (maybe_indirect) {
> @@ -678,10 +697,25 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>                                           PT64_ROOT_MAX_LEVEL);
>  }
>
> +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> +                                    spinlock_t *cache_lock)
> +{
> +       int orig_nobjs;
> +
> +       spin_lock(cache_lock);
> +       orig_nobjs = cache->nobjs;
> +       kvm_mmu_free_memory_cache(cache);
> +       if (orig_nobjs)
> +               percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> +
> +       spin_unlock(cache_lock);
> +}
> +
>  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
>  {
>         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> -       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> +       mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> +                                &vcpu->arch.mmu_shadow_page_cache_lock);
>         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
>         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
>  }
> @@ -1693,27 +1727,15 @@ static int is_empty_shadow_page(u64 *spt)
>  }
>  #endif
>
> -/*
> - * This value is the sum of all of the kvm instances's
> - * kvm->arch.n_used_mmu_pages values.  We need a global,
> - * aggregate version in order to make the slab shrinker
> - * faster
> - */
> -static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
> -{
> -       kvm->arch.n_used_mmu_pages += nr;
> -       percpu_counter_add(&kvm_total_used_mmu_pages, nr);
> -}
> -
>  static void kvm_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
> -       kvm_mod_used_mmu_pages(kvm, +1);
> +       kvm->arch.n_used_mmu_pages++;
>         kvm_account_pgtable_pages((void *)sp->spt, +1);
>  }
>
>  static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
> -       kvm_mod_used_mmu_pages(kvm, -1);
> +       kvm->arch.n_used_mmu_pages--;
>         kvm_account_pgtable_pages((void *)sp->spt, -1);
>  }
>
> @@ -2150,8 +2172,31 @@ struct shadow_page_caches {
>         struct kvm_mmu_memory_cache *page_header_cache;
>         struct kvm_mmu_memory_cache *shadow_page_cache;
>         struct kvm_mmu_memory_cache *shadowed_info_cache;
> +       /*
> +        * Protects change in size of shadow_page_cache cache.
> +        */
> +       spinlock_t *shadow_page_cache_lock;
>  };
>
> +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> +                                   spinlock_t *cache_lock)
> +{
> +       int orig_nobjs;
> +       void *page;
> +
> +       if (!cache_lock) {
> +               spin_lock(cache_lock);
> +               orig_nobjs = shadow_page_cache->nobjs;
> +       }

I believe this is guaranteed to cause a null pointer dereference.

> +       page = kvm_mmu_memory_cache_alloc(shadow_page_cache);
> +       if (!cache_lock) {
> +               if (orig_nobjs)
> +                       percpu_counter_dec(&kvm_total_unused_mmu_pages);
> +               spin_unlock(cache_lock);

Again, this will cause a null-pointer dereference. The check above
just needs to be inverted.

> +       }
> +       return page;
> +}
> +
>  static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
>                                                       struct shadow_page_caches *caches,
>                                                       gfn_t gfn,
> @@ -2161,7 +2206,8 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
>         struct kvm_mmu_page *sp;
>
>         sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
> -       sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
> +       sp->spt = kvm_mmu_sp_memory_cache_alloc(caches->shadow_page_cache,
> +                                               caches->shadow_page_cache_lock);
>         if (!role.direct)
>                 sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
>
> @@ -2218,6 +2264,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
>                 .page_header_cache = &vcpu->arch.mmu_page_header_cache,
>                 .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
>                 .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
> +               .shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
>         };
>
>         return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
> @@ -5916,6 +5963,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>         vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
>
>         vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +       spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
>
>         vcpu->arch.mmu = &vcpu->arch.root_mmu;
>         vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> @@ -6051,11 +6099,6 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
>                 kvm_tdp_mmu_zap_invalidated_roots(kvm);
>  }
>
> -static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
> -{
> -       return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
> -}
> -
>  static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
>                         struct kvm_memory_slot *slot,
>                         struct kvm_page_track_notifier_node *node)
> @@ -6277,6 +6320,7 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
>         /* Direct SPs do not require a shadowed_info_cache. */
>         caches.page_header_cache = &kvm->arch.split_page_header_cache;
>         caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> +       caches.shadow_page_cache_lock = NULL;
>
>         /* Safe to pass NULL for vCPU since requesting a direct SP. */
>         return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
> @@ -6646,66 +6690,49 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
>  static unsigned long
>  mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  {
> -       struct kvm *kvm;
> -       int nr_to_scan = sc->nr_to_scan;
> +       struct kvm_mmu_memory_cache *cache;
> +       struct kvm *kvm, *first_kvm = NULL;
>         unsigned long freed = 0;
> +       /* spinlock for memory cache */
> +       spinlock_t *cache_lock;
> +       struct kvm_vcpu *vcpu;
> +       unsigned long i;
>
>         mutex_lock(&kvm_lock);
>
>         list_for_each_entry(kvm, &vm_list, vm_list) {
> -               int idx;
> -               LIST_HEAD(invalid_list);
> -
> -               /*
> -                * Never scan more than sc->nr_to_scan VM instances.
> -                * Will not hit this condition practically since we do not try
> -                * to shrink more than one VM and it is very unlikely to see
> -                * !n_used_mmu_pages so many times.
> -                */
> -               if (!nr_to_scan--)
> +               if (first_kvm == kvm)
>                         break;
> -               /*
> -                * n_used_mmu_pages is accessed without holding kvm->mmu_lock
> -                * here. We may skip a VM instance errorneosly, but we do not
> -                * want to shrink a VM that only started to populate its MMU
> -                * anyway.
> -                */
> -               if (!kvm->arch.n_used_mmu_pages &&
> -                   !kvm_has_zapped_obsolete_pages(kvm))
> -                       continue;
> +               if (!first_kvm)
> +                       first_kvm = kvm;
> +               list_move_tail(&kvm->vm_list, &vm_list);
>
> -               idx = srcu_read_lock(&kvm->srcu);

I think we still want to do the SRCU read lock here to prevent
use-after-free on the vCPUs.

> -               write_lock(&kvm->mmu_lock);
> +               kvm_for_each_vcpu(i, vcpu, kvm) {
> +                       cache = &vcpu->arch.mmu_shadow_page_cache;
> +                       cache_lock = vcpu->arch.mmu_shadow_page_cache_lock;
> +                       if (READ_ONCE(cache->nobjs)) {
> +                               spin_lock(cache_lock);
> +                               freed += kvm_mmu_empty_memory_cache(cache);

Would it make sense to just have kvm_mmu_empty_memory_cache()
decrement the per-cpu counter itself? I don't think there's much perf
to be gained by reducing percpu counter updates here and it would
consolidate the bookkeeping.

> +                               spin_unlock(cache_lock);
> +                       }
>
> -               if (kvm_has_zapped_obsolete_pages(kvm)) {
> -                       kvm_mmu_commit_zap_page(kvm,
> -                             &kvm->arch.zapped_obsolete_pages);
> -                       goto unlock;
>                 }
>
> -               freed = kvm_mmu_zap_oldest_mmu_pages(kvm, sc->nr_to_scan);
> -
> -unlock:
> -               write_unlock(&kvm->mmu_lock);
> -               srcu_read_unlock(&kvm->srcu, idx);
> -
> -               /*
> -                * unfair on small ones
> -                * per-vm shrinkers cry out
> -                * sadness comes quickly
> -                */

Nooooo, don't delete the beautiful poem!

> -               list_move_tail(&kvm->vm_list, &vm_list);
> -               break;
> +               if (freed >= sc->nr_to_scan)
> +                       break;
>         }
>
> +       if (freed)
> +               percpu_counter_sub(&kvm_total_unused_mmu_pages, freed);
>         mutex_unlock(&kvm_lock);
> +       percpu_counter_sync(&kvm_total_unused_mmu_pages);
>         return freed;
>  }
>
>  static unsigned long
>  mmu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
>  {
> -       return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> +       return percpu_counter_sum_positive(&kvm_total_unused_mmu_pages);

This will return 0 if the sum of all the per-cpu counters is negative.
It should never be negative though. Might be nice to add a warning if
we would get a negative sum.

>  }
>
>  static struct shrinker mmu_shrinker = {
> @@ -6820,7 +6847,7 @@ int kvm_mmu_vendor_module_init(void)
>         if (!mmu_page_header_cache)
>                 goto out;
>
> -       if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
> +       if (percpu_counter_init(&kvm_total_unused_mmu_pages, 0, GFP_KERNEL))
>                 goto out;
>
>         ret = register_shrinker(&mmu_shrinker, "x86-mmu");
> @@ -6830,7 +6857,7 @@ int kvm_mmu_vendor_module_init(void)
>         return 0;
>
>  out_shrinker:
> -       percpu_counter_destroy(&kvm_total_used_mmu_pages);
> +       percpu_counter_destroy(&kvm_total_unused_mmu_pages);
>  out:
>         mmu_destroy_caches();
>         return ret;
> @@ -6847,7 +6874,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
>  void kvm_mmu_vendor_module_exit(void)
>  {
>         mmu_destroy_caches();
> -       percpu_counter_destroy(&kvm_total_used_mmu_pages);
> +       percpu_counter_destroy(&kvm_total_unused_mmu_pages);
>         unregister_shrinker(&mmu_shrinker);
>  }
>
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index ac00bfbf32f6..c2a342028b6a 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -325,4 +325,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>  void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>  void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>
> +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> +                                   spinlock_t *cache_lock);
>  #endif /* __KVM_X86_MMU_INTERNAL_H */
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 764f7c87286f..4974fa96deff 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -264,7 +264,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
>         struct kvm_mmu_page *sp;
>
>         sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> -       sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> +       sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
> +                                               &vcpu->arch.mmu_shadow_page_cache_lock);
>
>         return sp;
>  }
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 01aad8b74162..efd9b38ea9a2 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1362,6 +1362,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
>  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
>  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);
>  int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
> +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc);
>  void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
>  void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>  #endif
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 13e88297f999..f2d762878b97 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -438,8 +438,10 @@ int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
>         return mc->nobjs;
>  }
>
> -void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc)
>  {
> +       int freed = mc->nobjs;
> +
>         while (mc->nobjs) {
>                 if (mc->kmem_cache)
>                         kmem_cache_free(mc->kmem_cache, mc->objects[--mc->nobjs]);
> @@ -447,8 +449,13 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
>                         free_page((unsigned long)mc->objects[--mc->nobjs]);
>         }
>
> -       kvfree(mc->objects);
> +       return freed;
> +}
>
> +void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> +{
> +       kvm_mmu_empty_memory_cache(mc);
> +       kvfree(mc->objects);
>         mc->objects = NULL;
>         mc->capacity = 0;
>  }
> --
> 2.39.0.314.g84b9a713c41-goog
>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split
  2022-12-22  2:34 ` [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split Vipin Sharma
@ 2022-12-27 19:02   ` Ben Gardon
  2022-12-28 22:07     ` Vipin Sharma
  2022-12-29 22:30   ` David Matlack
  1 sibling, 1 reply; 47+ messages in thread
From: Ben Gardon @ 2022-12-27 19:02 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> When dirty log is enabled, huge pages are split. Page table's pages

Nit: Suggest "When huge pages are split for dirty log" since this can
happen at various points during dirty logging.
Same below.

> during the split are allocated based on the current thread's NUMA node
> or mempolicy. This causes inefficient page table accesses if the
> underlying page is on a different NUMA node.
>
> Allocate page table's pages on the same NUMA node as the underlying huge
> page when dirty log is enabled and huge pages are split.
>
> The performance gain during the pre-copy phase of live migrations of a
> 416 vCPUs and 11 TiB memory VM on an 8-node host was seen in the range
> of 130% to 150%.
>
> Suggested-by: David Matlack <dmatlack@google.com>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 12 ++++++++----
>  include/linux/kvm_host.h   | 18 ++++++++++++++++++
>  2 files changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 4974fa96deff..376b8dceb3f9 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1403,7 +1403,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
>         return spte_set;
>  }
>
> -static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
> +static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(int nid, gfp_t gfp)
>  {
>         struct kvm_mmu_page *sp;
>
> @@ -1413,7 +1413,8 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
>         if (!sp)
>                 return NULL;
>
> -       sp->spt = (void *)__get_free_page(gfp);
> +       sp->spt = kvm_mmu_get_free_page(nid, gfp);
> +

Just so that kvm_mmu_get_free_page isn't dead code in the previous
commit, I'd do this refactor there and just pass NUMA_NO_NODE here.

>         if (!sp->spt) {
>                 kmem_cache_free(mmu_page_header_cache, sp);
>                 return NULL;
> @@ -1427,6 +1428,9 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>                                                        bool shared)
>  {
>         struct kvm_mmu_page *sp;
> +       int nid;
> +
> +       nid = kvm_pfn_to_page_table_nid(spte_to_pfn(iter->old_spte));
>
>         /*
>          * Since we are allocating while under the MMU lock we have to be
> @@ -1437,7 +1441,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>          * If this allocation fails we drop the lock and retry with reclaim
>          * allowed.
>          */
> -       sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
> +       sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_NOWAIT | __GFP_ACCOUNT);
>         if (sp)
>                 return sp;
>
> @@ -1449,7 +1453,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>                 write_unlock(&kvm->mmu_lock);
>
>         iter->yielded = true;
> -       sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
> +       sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_KERNEL_ACCOUNT);
>
>         if (shared)
>                 read_lock(&kvm->mmu_lock);
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index d48064503b88..a262e15ebd19 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1583,6 +1583,24 @@ void kvm_arch_sync_events(struct kvm *kvm);
>  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
>
>  struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn);
> +
> +/*
> + * Tells the appropriate NUMA node location of the page table's page based on
> + * pfn it will point to.
> + *
> + * Return the nid of the page if pfn is valid and backed by a refcounted page,
> + * otherwise, return the nearest memory node for the current CPU.

Nit: Should this be "current thread"?

> + */
> +static inline int kvm_pfn_to_page_table_nid(kvm_pfn_t pfn)

This could just be kvm_pfn_nid (or even better kvm_pfn_node_id) since
this really has nothing to do with page tables. We just want to know
which NUMA node backs the given PFN.

> +{
> +       struct page *page = kvm_pfn_to_refcounted_page(pfn);
> +
> +       if (page)
> +               return page_to_nid(page);
> +       else
> +               return numa_mem_id();
> +}
> +
>  bool kvm_is_zone_device_page(struct page *page);
>
>  struct kvm_irq_ack_notifier {
> --
> 2.39.0.314.g84b9a713c41-goog
>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  2022-12-22  2:34 ` [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{} Vipin Sharma
@ 2022-12-27 19:09   ` Ben Gardon
  2022-12-28 22:07     ` Vipin Sharma
  2022-12-29 23:08   ` David Matlack
  1 sibling, 1 reply; 47+ messages in thread
From: Ben Gardon @ 2022-12-27 19:09 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> Add a 'node' field to kvm_mmu_memory_cache{} to denote which NUMA node
> this cache should allocate memory from. Default initialize it to
> NUMA_NO_NODE in all architectures.
>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/arm64/kvm/arm.c      |  2 +-
>  arch/arm64/kvm/mmu.c      |  4 +++-
>  arch/mips/kvm/mips.c      |  2 ++
>  arch/riscv/kvm/mmu.c      |  2 +-
>  arch/riscv/kvm/vcpu.c     |  2 +-
>  arch/x86/kvm/mmu/mmu.c    | 22 ++++++++++++----------
>  include/linux/kvm_host.h  |  6 ++++++
>  include/linux/kvm_types.h |  2 ++
>  8 files changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 9c5573bc4614..52a41f4532e2 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>         vcpu->arch.target = -1;
>         bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
>
> -       vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
>
>         /*
>          * Default value for the FP state, will be overloaded at load
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 31d7fa4c7c14..bd07155e17fa 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>  {
>         phys_addr_t addr;
>         int ret = 0;
> -       struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> +       struct kvm_mmu_memory_cache cache;
>         struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
>         enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
>                                      KVM_PGTABLE_PROT_R |
>                                      (writable ? KVM_PGTABLE_PROT_W : 0);
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&cache, NULL, NUMA_NO_NODE);
> +
>         if (is_protected_kvm_enabled())
>                 return -EPERM;
>
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index a25e0b73ee70..b017c29a9340 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -304,6 +304,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>                      HRTIMER_MODE_REL);
>         vcpu->arch.comparecount_timer.function = kvm_mips_comparecount_wakeup;
>
> +       vcpu->arch.mmu_page_cache.node = NUMA_NO_NODE;
> +

It looks weird to have MIPS not using the initialization macro. Should
it just have a GFP_ZERO parameter?

>         /*
>          * Allocate space for host mode exception handlers that handle
>          * guest mode exits
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 34b57e0be2ef..119de4520cc6 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -353,9 +353,9 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
>         phys_addr_t addr, end;
>         struct kvm_mmu_memory_cache pcache = {
>                 .gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
> -               .gfp_zero = __GFP_ZERO,
>         };
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&pcache, NULL, NUMA_NO_NODE);
>         end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
>         pfn = __phys_to_pfn(hpa);
>
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index 7c08567097f0..189b14feb365 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -161,7 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>
>         /* Mark this VCPU never ran */
>         vcpu->arch.ran_atleast_once = false;
> -       vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
>         bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
>
>         /* Setup ISA features available to VCPU */
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6f6a10d7a871..23a3b82b2384 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5954,13 +5954,14 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  {
>         int ret;
>
> -       vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
> -       vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
> +                                 pte_list_desc_cache, NUMA_NO_NODE);
>
> -       vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> -       vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
> +                                 mmu_page_header_cache, NUMA_NO_NODE);
>
> -       vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
> +                                 NULL, NUMA_NO_NODE);
>         spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
>
>         vcpu->arch.mmu = &vcpu->arch.root_mmu;
> @@ -6124,14 +6125,15 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>         node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>         kvm_page_track_register_notifier(kvm, node);
>
> -       kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> -       kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
> +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
> +                                 mmu_page_header_cache, NUMA_NO_NODE);
>
> -       kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> +                                 NULL, NUMA_NO_NODE);
>         spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
>
> -       kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
> -       kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
> +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
> +                                 pte_list_desc_cache, NUMA_NO_NODE);
>
>         return 0;
>  }
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index a262e15ebd19..719687a37ef7 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2302,4 +2302,10 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
>  /* Max number of entries allowed for each kvm dirty ring */
>  #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
>
> +#define INIT_KVM_MMU_MEMORY_CACHE(_cache, _kmem_cache, _node) ({       \
> +       (_cache)->kmem_cache = _kmem_cache;                             \
> +       (_cache)->gfp_zero = __GFP_ZERO;                                \
> +       (_cache)->node = _node;                                         \
> +})
> +

Given that this initialization is probably not happening in a super
hot path, is there any downside to just using a function for the
initialization?
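For illustration, a function-based equivalent might look like the sketch below. The field names are taken from the patch; the types here are toy stand-ins for the real kernel definitions, just to show the shape:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the kernel definitions, for illustration only. */
#define __GFP_ZERO 0x100u
#define NUMA_NO_NODE (-1)

struct kmem_cache;

struct kvm_mmu_memory_cache {
	unsigned int gfp_zero;
	struct kmem_cache *kmem_cache;
	int node;
};

/*
 * A plain (inline) function gives full type checking of its arguments
 * and, outside of hot paths, costs the same as the macro.
 */
static inline void kvm_mmu_memory_cache_init(struct kvm_mmu_memory_cache *cache,
					     struct kmem_cache *kmem_cache,
					     int node)
{
	cache->kmem_cache = kmem_cache;
	cache->gfp_zero = __GFP_ZERO;
	cache->node = node;
}
```

The helper name is hypothetical; the point is only that a function avoids the usual macro pitfalls (no argument type checking, statement-expression tricks) for free here.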

>  #endif
> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index 76de36e56cdf..9c70ce95e51f 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -97,6 +97,8 @@ struct kvm_mmu_memory_cache {
>         struct kmem_cache *kmem_cache;
>         int capacity;
>         void **objects;
> +       /* Node on which memory should be allocated by default */
> +       int node;
>  };
>  #endif
>
> --
> 2.39.0.314.g84b9a713c41-goog
>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 7/9] KVM: x86/mmu: Allocate page table's pages on NUMA node of the underlying pages
  2022-12-22  2:34 ` [Patch v3 7/9] KVM: x86/mmu: Allocate page table's pages on NUMA node of the underlying pages Vipin Sharma
@ 2022-12-27 19:34   ` Ben Gardon
  2022-12-28 22:08     ` Vipin Sharma
  0 siblings, 1 reply; 47+ messages in thread
From: Ben Gardon @ 2022-12-27 19:34 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> Page table pages of a VM are currently allocated based on the current
> task's NUMA node or its mempolicy. This can cause suboptimal remote
> accesses by the vCPU if it is accessing physical pages local to its NUMA
> node but the page table pages mapping those physical pages were created
> by some other vCPU which was on a different NUMA node or had a different
> policy.
>
> Allocate page table pages on the same NUMA node where the underlying
> physical page exists. Page tables at levels 5, 4, and 3 might not end up
> on the same NUMA node as they can span multiple NUMA nodes.

A page table at any level could map memory spanning multiple NUMA
nodes, it just becomes more likely at higher levels.
We're only guaranteed that a page table maps memory all on the same
node if it's a split hugepage.
This change can only guarantee that the page table pages are allocated
on the same node as at least some of the memory they map.
Of course, in practice the above is absolutely correct, since we'd
expect to have multi-GB contiguous ranges of GFNs allocated on the
same node via huge pages.

And since the root pages are allocated based only on where the thread
allocating them is running, they're not actually guaranteed to be on
the same node as any of the memory they map. (Though they probably
will be.)

>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  2 +-
>  arch/x86/kvm/mmu/mmu.c          | 63 ++++++++++++++++++++++-----------
>  arch/x86/kvm/mmu/paging_tmpl.h  |  4 +--
>  arch/x86/kvm/mmu/tdp_mmu.c      | 11 +++---
>  virt/kvm/kvm_main.c             |  2 +-
>  5 files changed, 53 insertions(+), 29 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 293994fabae3..b1f319ad6f89 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -782,7 +782,7 @@ struct kvm_vcpu_arch {
>         struct kvm_mmu *walk_mmu;
>
>         struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
> -       struct kvm_mmu_memory_cache mmu_shadow_page_cache;
> +       struct kvm_mmu_memory_cache mmu_shadow_page_cache[MAX_NUMNODES];
>         struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
>         struct kvm_mmu_memory_cache mmu_page_header_cache;
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 23a3b82b2384..511c6ef265ee 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -677,24 +677,29 @@ static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
>
>  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  {
> -       int r;
> +       int r, nid;
>
>         /* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
>         r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
>                                        1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
>         if (r)
>                 return r;
> -       r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> -                                     &vcpu->arch.mmu_shadow_page_cache_lock,
> -                                     PT64_ROOT_MAX_LEVEL);
> -       if (r)
> -               return r;
> +
> +       for_each_online_node(nid) {
> +               r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache[nid],
> +                                             &vcpu->arch.mmu_shadow_page_cache_lock,
> +                                             PT64_ROOT_MAX_LEVEL);
> +               if (r)
> +                       return r;
> +       }
> +
>         if (maybe_indirect) {
>                 r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadowed_info_cache,
>                                                PT64_ROOT_MAX_LEVEL);
>                 if (r)
>                         return r;
>         }
> +
>         return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
>                                           PT64_ROOT_MAX_LEVEL);
>  }
> @@ -715,9 +720,14 @@ static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
>
>  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
>  {
> +       int nid;
> +
>         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> -       mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> -                                &vcpu->arch.mmu_shadow_page_cache_lock);
> +
> +       for_each_node(nid)
> +               mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache[nid],
> +                                        &vcpu->arch.mmu_shadow_page_cache_lock);
> +

Was just trying to think if there could be any issue with memory
leakage if the online nodes changed, though IDK if any hardware does
that.
Still, it might be more robust to use ARRAY_SIZE and cover the whole array.
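As a sketch of that suggestion (with toy stand-ins for the kernel types), freeing over the whole array rather than only the currently online nodes cannot leak objects that were cached while a node was online but freed after it went offline:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_NUMNODES 4	/* toy value; the real one derives from CONFIG_NODES_SHIFT */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct kvm_mmu_memory_cache {
	int nobjs;
};

/* Toy model of the per-vCPU arch state holding the per-node caches. */
struct vcpu_arch {
	struct kvm_mmu_memory_cache mmu_shadow_page_cache[MAX_NUMNODES];
};

static int objects_freed;

/* Stand-in for releasing the cached pages back to the system. */
static void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
{
	objects_freed += mc->nobjs;
	mc->nobjs = 0;
}

/* Walk every slot of the array; an empty slot is cheap to skip. */
static void mmu_free_shadow_page_caches(struct vcpu_arch *arch)
{
	size_t i;

	for (i = 0; i < ARRAY_SIZE(arch->mmu_shadow_page_cache); i++)
		kvm_mmu_free_memory_cache(&arch->mmu_shadow_page_cache[i]);
}
```

The function and struct names are illustrative only; the design point is that teardown iterates the full array, independent of the current online-node set.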

>         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
>         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
>  }
> @@ -2256,11 +2266,12 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
>
>  static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
>                                                     gfn_t gfn,
> -                                                   union kvm_mmu_page_role role)
> +                                                   union kvm_mmu_page_role role,
> +                                                   int nid)
>  {
>         struct shadow_page_caches caches = {
>                 .page_header_cache = &vcpu->arch.mmu_page_header_cache,
> -               .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
> +               .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache[nid],
>                 .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
>                 .shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
>         };
> @@ -2316,15 +2327,19 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
>
>  static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
>                                                  u64 *sptep, gfn_t gfn,
> -                                                bool direct, unsigned int access)
> +                                                bool direct, unsigned int access,
> +                                                kvm_pfn_t pfn)
>  {
>         union kvm_mmu_page_role role;
> +       int nid;
>
>         if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
>                 return ERR_PTR(-EEXIST);
>
>         role = kvm_mmu_child_role(sptep, direct, access);
> -       return kvm_mmu_get_shadow_page(vcpu, gfn, role);
> +       nid = kvm_pfn_to_page_table_nid(pfn);
> +
> +       return kvm_mmu_get_shadow_page(vcpu, gfn, role, nid);
>  }
>
>  static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
> @@ -3208,7 +3223,8 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>                 if (it.level == fault->goal_level)
>                         break;
>
> -               sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);
> +               sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true,
> +                                         ACC_ALL, fault->pfn);
>                 if (sp == ERR_PTR(-EEXIST))
>                         continue;
>
> @@ -3636,7 +3652,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
>         WARN_ON_ONCE(quadrant && !role.has_4_byte_gpte);
>         WARN_ON_ONCE(role.direct && role.has_4_byte_gpte);
>
> -       sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
> +       sp = kvm_mmu_get_shadow_page(vcpu, gfn, role, numa_mem_id());
>         ++sp->root_count;
>
>         return __pa(sp->spt);
> @@ -5952,7 +5968,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
>
>  int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  {
> -       int ret;
> +       int ret, nid;
>
>         INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
>                                   pte_list_desc_cache, NUMA_NO_NODE);
> @@ -5960,8 +5976,9 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>         INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
>                                   mmu_page_header_cache, NUMA_NO_NODE);
>
> -       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
> -                                 NULL, NUMA_NO_NODE);
> +       for_each_node(nid)
> +               INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache[nid],
> +                                         NULL, nid);
>         spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
>
>         vcpu->arch.mmu = &vcpu->arch.root_mmu;
> @@ -6692,13 +6709,17 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
>  }
>
>  static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache,
> +                                     int cache_count,
>                                       spinlock_t *cache_lock)
>  {
>         unsigned long freed = 0;
> +       int nid;
>
>         spin_lock(cache_lock);
> -       if (cache->nobjs)
> -               freed = kvm_mmu_empty_memory_cache(cache);
> +       for (nid = 0; nid < cache_count; nid++) {
> +               if (node_online(nid) && cache[nid].nobjs)

Is there any reason to keep the cache if !node_online(nid)?
Actually, I'd also just drop the cache_count argument and always
iterate over the entire array, only checking nobjs. There's no
guarantee I'm aware of that the set of nodes has a sequential series
of IDs starting at 0, and you'd get a bug if that wasn't the case, since
this only iterates up to nid < cache_count but some of the earlier
nids might not have been online.
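A sketch of the suggested shape (toy types again): drop the count parameter and let the nobjs check do the filtering, so non-contiguous or offline node IDs can never be skipped:

```c
#include <assert.h>

#define MAX_NUMNODES 4	/* toy value for illustration */

struct kvm_mmu_memory_cache {
	int nobjs;
};

/* Stand-in for kvm_mmu_empty_memory_cache(). */
static unsigned long empty_memory_cache(struct kvm_mmu_memory_cache *mc)
{
	unsigned long freed = mc->nobjs;

	mc->nobjs = 0;
	return freed;
}

/*
 * No cache_count argument: always scan the full array. Slots for
 * offline (or never-online) nodes simply have nobjs == 0, so the
 * node_online() check becomes unnecessary.
 */
static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache)
{
	unsigned long freed = 0;
	int nid;

	for (nid = 0; nid < MAX_NUMNODES; nid++) {
		if (cache[nid].nobjs)
			freed += empty_memory_cache(&cache[nid]);
	}
	return freed;
}
```

(Locking is omitted here; the real function would still take the cache lock around the loop as in the patch.)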

> +                       freed += kvm_mmu_empty_memory_cache(&cache[nid]);
> +       }
>         spin_unlock(cache_lock);
>         return freed;
>  }
> @@ -6721,13 +6742,15 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>                 list_move_tail(&kvm->vm_list, &vm_list);
>
>                 freed += mmu_shrink_cache(&kvm->arch.split_shadow_page_cache,
> +                                         1,

So lonely.
One.
All by itself,
with only a comma for company.

NIT: This could be merged to the previous or subsequent lines.

>                                           &kvm->arch.split_shadow_page_cache_lock);
>
>                 if (freed >= sc->nr_to_scan)
>                         break;
>
>                 kvm_for_each_vcpu(i, vcpu, kvm) {
> -                       freed += mmu_shrink_cache(&vcpu->arch.mmu_shadow_page_cache,
> +                       freed += mmu_shrink_cache(vcpu->arch.mmu_shadow_page_cache,
> +                                                 MAX_NUMNODES,
>                                                   &vcpu->arch.mmu_shadow_page_cache_lock);
>                 }
>
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index e5662dbd519c..1ceca62ec4cf 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -652,7 +652,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
>                 table_gfn = gw->table_gfn[it.level - 2];
>                 access = gw->pt_access[it.level - 2];
>                 sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
> -                                         false, access);
> +                                         false, access, fault->pfn);
>
>                 if (sp != ERR_PTR(-EEXIST)) {
>                         /*
> @@ -708,7 +708,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
>                 validate_direct_spte(vcpu, it.sptep, direct_access);
>
>                 sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn,
> -                                         true, direct_access);
> +                                         true, direct_access, fault->pfn);
>                 if (sp == ERR_PTR(-EEXIST))
>                         continue;
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 376b8dceb3f9..b5abae2366dd 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -259,12 +259,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
>                     kvm_mmu_page_as_id(_root) != _as_id) {              \
>                 } else
>
> -static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> +static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu, int nid)
>  {
>         struct kvm_mmu_page *sp;
>
>         sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> -       sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
> +       sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache[nid],
>                                                 &vcpu->arch.mmu_shadow_page_cache_lock);
>
>         return sp;
> @@ -317,7 +317,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
>                         goto out;
>         }
>
> -       root = tdp_mmu_alloc_sp(vcpu);
> +       root = tdp_mmu_alloc_sp(vcpu, numa_mem_id());

Might be worth calling out somewhere that the root page is just
allocated based on where the thread allocating it runs.

>         tdp_mmu_init_sp(root, NULL, 0, role);
>
>         refcount_set(&root->tdp_mmu_root_count, 1);
> @@ -1149,7 +1149,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>         struct kvm *kvm = vcpu->kvm;
>         struct tdp_iter iter;
>         struct kvm_mmu_page *sp;
> -       int ret = RET_PF_RETRY;
> +       int ret = RET_PF_RETRY, nid;
>
>         kvm_mmu_hugepage_adjust(vcpu, fault);
>
> @@ -1178,11 +1178,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>                     !is_large_pte(iter.old_spte))
>                         continue;
>
> +               nid = kvm_pfn_to_page_table_nid(fault->pfn);
>                 /*
>                  * The SPTE is either non-present or points to a huge page that
>                  * needs to be split.
>                  */
> -               sp = tdp_mmu_alloc_sp(vcpu);
> +               sp = tdp_mmu_alloc_sp(vcpu, nid);
>                 tdp_mmu_init_child_sp(sp, &iter);
>
>                 sp->nx_huge_page_disallowed = fault->huge_page_disallowed;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d96c8146e9ba..4f3db7ffeba8 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -415,7 +415,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
>         if (mc->kmem_cache)
>                 return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
>         else
> -               return (void *)__get_free_page(gfp_flags);
> +               return kvm_mmu_get_free_page(mc->node, gfp_flags);

You could do part of this change in the commit that introduced
kvm_mmu_get_free_page too.
>  }
>
>  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
> --
> 2.39.0.314.g84b9a713c41-goog
>


* Re: [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware
  2022-12-22  2:34 ` [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware Vipin Sharma
@ 2022-12-27 19:42   ` Ben Gardon
  2022-12-28 22:08     ` Vipin Sharma
  2022-12-29 23:18   ` David Matlack
  1 sibling, 1 reply; 47+ messages in thread
From: Ben Gardon @ 2022-12-27 19:42 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> Make split_shadow_page_cache NUMA aware and allocate page table's pages
> during the split based on the underlying physical page's NUMA node.
>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  2 +-
>  arch/x86/kvm/mmu/mmu.c          | 50 ++++++++++++++++++---------------
>  2 files changed, 29 insertions(+), 23 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index b1f319ad6f89..7b3f36ae37a4 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1410,7 +1410,7 @@ struct kvm_arch {
>          *
>          * Protected by kvm->slots_lock.
>          */
> -       struct kvm_mmu_memory_cache split_shadow_page_cache;
> +       struct kvm_mmu_memory_cache split_shadow_page_cache[MAX_NUMNODES];
>         struct kvm_mmu_memory_cache split_page_header_cache;
>
>         /*
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 511c6ef265ee..7454bfc49a51 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6126,7 +6126,7 @@ static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
>  int kvm_mmu_init_vm(struct kvm *kvm)
>  {
>         struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
> -       int r;
> +       int r, nid;
>
>         INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
>         INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
> @@ -6145,8 +6145,9 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>         INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
>                                   mmu_page_header_cache, NUMA_NO_NODE);
>
> -       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> -                                 NULL, NUMA_NO_NODE);
> +       for_each_node(nid)

Again, assuming no one sets CONFIG_NODE_SHIFT to a ridiculous value,
it would probably be fine to initialize the entire array here since
that doesn't take any extra memory and we're not in a super hot path.

> +               INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache[nid],
> +                                         NULL, NUMA_NO_NODE);
>         spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
>
>         INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
> @@ -6157,10 +6158,13 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>
>  static void mmu_free_vm_memory_caches(struct kvm *kvm)
>  {
> +       int nid;
> +
>         kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
>         kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
> -       mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
> -                                &kvm->arch.split_shadow_page_cache_lock);
> +       for_each_node(nid)

Again, could just iterate over the whole array here.

> +               mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache[nid],
> +                                        &kvm->arch.split_shadow_page_cache_lock);
>  }
>
>  void kvm_mmu_uninit_vm(struct kvm *kvm)
> @@ -6269,7 +6273,7 @@ static inline bool need_topup(struct kvm_mmu_memory_cache *cache, int min)
>         return kvm_mmu_memory_cache_nr_free_objects(cache) < min;
>  }
>
> -static bool need_topup_split_caches_or_resched(struct kvm *kvm)
> +static bool need_topup_split_caches_or_resched(struct kvm *kvm, int nid)
>  {
>         if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
>                 return true;
> @@ -6281,10 +6285,10 @@ static bool need_topup_split_caches_or_resched(struct kvm *kvm)
>          */
>         return need_topup(&kvm->arch.split_desc_cache, SPLIT_DESC_CACHE_MIN_NR_OBJECTS) ||
>                need_topup(&kvm->arch.split_page_header_cache, 1) ||
> -              need_topup(&kvm->arch.split_shadow_page_cache, 1);
> +              need_topup(&kvm->arch.split_shadow_page_cache[nid], 1);
>  }
>
> -static int topup_split_caches(struct kvm *kvm)
> +static int topup_split_caches(struct kvm *kvm, int nid)
>  {
>         /*
>          * Allocating rmap list entries when splitting huge pages for nested
> @@ -6314,18 +6318,21 @@ static int topup_split_caches(struct kvm *kvm)
>         if (r)
>                 return r;
>
> -       return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
> +       return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache[nid],
>                                          &kvm->arch.split_shadow_page_cache_lock,
>                                          1);
>  }
>
> -static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *huge_sptep)
> +static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm,
> +                                                       u64 *huge_sptep,
> +                                                       u64 huge_spte)

These can go on the same line.

>  {
>         struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
>         struct shadow_page_caches caches = {};
>         union kvm_mmu_page_role role;
>         unsigned int access;
>         gfn_t gfn;
> +       int nid;
>
>         gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
>         access = kvm_mmu_page_get_access(huge_sp, spte_index(huge_sptep));
> @@ -6338,9 +6345,11 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
>          */
>         role = kvm_mmu_child_role(huge_sptep, /*direct=*/true, access);
>
> +       nid = kvm_pfn_to_page_table_nid(spte_to_pfn(huge_spte));
> +
>         /* Direct SPs do not require a shadowed_info_cache. */
>         caches.page_header_cache = &kvm->arch.split_page_header_cache;
> -       caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> +       caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache[nid];
>         caches.shadow_page_cache_lock = &kvm->arch.split_shadow_page_cache_lock;
>
>         /* Safe to pass NULL for vCPU since requesting a direct SP. */
> @@ -6360,7 +6369,7 @@ static void shadow_mmu_split_huge_page(struct kvm *kvm,
>         gfn_t gfn;
>         int index;
>
> -       sp = shadow_mmu_get_sp_for_split(kvm, huge_sptep);
> +       sp = shadow_mmu_get_sp_for_split(kvm, huge_sptep, huge_spte);
>
>         for (index = 0; index < SPTE_ENT_PER_PAGE; index++) {
>                 sptep = &sp->spt[index];
> @@ -6398,7 +6407,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
>                                           u64 *huge_sptep)
>  {
>         struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
> -       int level, r = 0;
> +       int level, r = 0, nid;
>         gfn_t gfn;
>         u64 spte;
>
> @@ -6406,13 +6415,14 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
>         gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
>         level = huge_sp->role.level;
>         spte = *huge_sptep;
> +       nid = kvm_pfn_to_page_table_nid(spte_to_pfn(spte));
>
>         if (kvm_mmu_available_pages(kvm) <= KVM_MIN_FREE_MMU_PAGES) {
>                 r = -ENOSPC;
>                 goto out;
>         }
>
> -       if (need_topup_split_caches_or_resched(kvm)) {
> +       if (need_topup_split_caches_or_resched(kvm, nid)) {
>                 write_unlock(&kvm->mmu_lock);
>                 cond_resched();
>                 /*
> @@ -6420,7 +6430,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
>                  * rmap iterator should be restarted because the MMU lock was
>                  * dropped.
>                  */
> -               r = topup_split_caches(kvm) ?: -EAGAIN;
> +               r = topup_split_caches(kvm, nid) ?: -EAGAIN;
>                 write_lock(&kvm->mmu_lock);
>                 goto out;
>         }
> @@ -6709,17 +6719,15 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
>  }
>
>  static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache,
> -                                     int cache_count,
>                                       spinlock_t *cache_lock)
>  {
>         unsigned long freed = 0;
>         int nid;
>
>         spin_lock(cache_lock);
> -       for (nid = 0; nid < cache_count; nid++) {
> -               if (node_online(nid) && cache[nid].nobjs)
> +       for_each_online_node(nid)
> +               if (cache[nid].nobjs)
>                         freed += kvm_mmu_empty_memory_cache(&cache[nid]);
> -       }
>         spin_unlock(cache_lock);
>         return freed;
>  }
> @@ -6741,8 +6749,7 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>                         first_kvm = kvm;
>                 list_move_tail(&kvm->vm_list, &vm_list);
>
> -               freed += mmu_shrink_cache(&kvm->arch.split_shadow_page_cache,
> -                                         1,
> +               freed += mmu_shrink_cache(kvm->arch.split_shadow_page_cache,
>                                           &kvm->arch.split_shadow_page_cache_lock);
>
>                 if (freed >= sc->nr_to_scan)
> @@ -6750,7 +6757,6 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>
>                 kvm_for_each_vcpu(i, vcpu, kvm) {
>                         freed += mmu_shrink_cache(vcpu->arch.mmu_shadow_page_cache,
> -                                                 MAX_NUMNODES,
>                                                   &vcpu->arch.mmu_shadow_page_cache_lock);
>                 }
>
> --
> 2.39.0.314.g84b9a713c41-goog
>


* Re: [Patch v3 9/9] KVM: x86/mmu: Reduce default cache size in KVM from 40 to PT64_ROOT_MAX_LEVEL
  2022-12-22  2:34 ` [Patch v3 9/9] KVM: x86/mmu: Reduce default cache size in KVM from 40 to PT64_ROOT_MAX_LEVEL Vipin Sharma
@ 2022-12-27 19:52   ` Ben Gardon
  2022-12-28 22:08     ` Vipin Sharma
  0 siblings, 1 reply; 47+ messages in thread
From: Ben Gardon @ 2022-12-27 19:52 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE is set to 40 without any specific
> reason. Reduce default size to PT64_ROOT_MAX_LEVEL, which is currently
> 5.
>
> Change mmu_pte_list_desc_cache's size to what is actually needed, as it
> is more than 5 but far less than 40.

Why do you say more than 5? At least to resolve a page fault we'll
never need more than 4 pages on a system with 5-level paging, since the
root is already allocated.

>
> Tested by running dirty_log_perf_test on both tdp and shadow MMU with 48
> vcpu and 2GB/vcpu size on a 2 NUMA node machine. No impact on
> performance noticed.
>
> Ran perf on dirty_log_perf_test and found kvm_mmu_get_free_page() calls
> reduced by ~3300 which is near to 48 (vcpus) * 2 (nodes) * 35 (cache
> size).
>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/include/asm/kvm_types.h | 2 +-
>  arch/x86/kvm/mmu/mmu.c           | 7 ++++---
>  2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_types.h b/arch/x86/include/asm/kvm_types.h
> index 08f1b57d3b62..752dab218a62 100644
> --- a/arch/x86/include/asm/kvm_types.h
> +++ b/arch/x86/include/asm/kvm_types.h
> @@ -2,6 +2,6 @@
>  #ifndef _ASM_X86_KVM_TYPES_H
>  #define _ASM_X86_KVM_TYPES_H
>
> -#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
> +#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE PT64_ROOT_MAX_LEVEL

Please add a comment explaining why this value was chosen.

>
>  #endif /* _ASM_X86_KVM_TYPES_H */
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 7454bfc49a51..f89d933ff380 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -677,11 +677,12 @@ static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
>
>  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  {
> -       int r, nid;
> +       int r, nid, desc_capacity;
>
>         /* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
> -       r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
> -                                      1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
> +       desc_capacity = 1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM;
> +       r = __kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
> +                                        desc_capacity, desc_capacity);
>         if (r)
>                 return r;
>
> --
> 2.39.0.314.g84b9a713c41-goog
>


* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2022-12-27 18:37   ` Ben Gardon
@ 2022-12-28 22:07     ` Vipin Sharma
  2022-12-29 21:15       ` David Matlack
  0 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2022-12-28 22:07 UTC (permalink / raw)
  To: Ben Gardon; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Tue, Dec 27, 2022 at 10:37 AM Ben Gardon <bgardon@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > mmu_shrink_scan() is very disruptive to VMs. It picks the first
> > VM in the vm_list and zaps the oldest page, which is most likely an
> > upper-level SPTE and most likely to be reused. Prior to the TDP MMU,
> > this was even more disruptive in the nested VM case, considering L1
> > SPTEs will be the oldest even though most of the entries are for L2
> > SPTEs.
> >
> > As discussed in
> > https://lore.kernel.org/lkml/Y45dldZnI6OIf+a5@google.com/
> > the shrinker logic has not been very useful in actually keeping VMs
> > performant or reducing memory usage.
> >
> > Change mmu_shrink_scan() to free pages from the vCPU's shadow page
> > cache. Freeing pages from the cache doesn't cause vCPU exits; therefore, a
> > VM's performance should not be affected.
> >
> > This also allows to change cache capacities without worrying too much
> > about high memory usage in cache.
> >
> > Tested this change by running dirty_log_perf_test while dropping cache
> > via "echo 2 > /proc/sys/vm/drop_caches" at 1 second interval
> > continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> > logs from kvm_mmu_memory_cache_alloc(), which is expected.
>
> Oh, that's not a good thing. I don't think we want to be hitting those
> warnings. For one, kernel warnings should not be expected behavior,
> probably for many reasons, but at least because Syzbot will find it.
> In this particular case, we don't want to hit that because in that
> case we'll try to do a GFP_ATOMIC, which can fail, and if it fails,
> we'll BUG:
>
> void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
> {
>         void *p;
>
>         if (WARN_ON(!mc->nobjs))
>                 p = mmu_memory_cache_alloc_obj(mc, GFP_ATOMIC | __GFP_ACCOUNT);
>         else
>                 p = mc->objects[--mc->nobjs];
>         BUG_ON(!p);
>         return p;
> }
>
> Perhaps the risk of actually panicking is small, but it probably
> indicates that we need better error handling around failed allocations
> from the cache.
> Or, the slightly less elegant approach might be to just hold the cache
> lock around the cache topup and use of pages from the cache, but
> adding better error handling would probably be cleaner.

I was counting on the fact that the shrinker will ideally run only in
extreme cases, i.e. when the host is running low on memory, so this
WARN_ON will only rarely be hit. I was not aware of Syzbot; it does
seem like it will be a concern if it exercises this path.

I thought about keeping a mutex, taking it during topup and releasing
it after the whole operation is done, but I dropped the idea because
the mutex would be held for a long time and might block the memory
shrinker for longer. I am not sure, though, whether this is a valid
concern.

I can't think of better error handling for this situation. I can
change the logic to hold a mutex if the hold-duration concern above is
not an issue compared to the current WARN_ON() approach.

>
> >
> > Suggested-by: Sean Christopherson <seanjc@google.com>
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |   5 +
> >  arch/x86/kvm/mmu/mmu.c          | 163 +++++++++++++++++++-------------
> >  arch/x86/kvm/mmu/mmu_internal.h |   2 +
> >  arch/x86/kvm/mmu/tdp_mmu.c      |   3 +-
> >  include/linux/kvm_host.h        |   1 +
> >  virt/kvm/kvm_main.c             |  11 ++-
> >  6 files changed, 114 insertions(+), 71 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index aa4eb8cfcd7e..89cc809e4a00 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -786,6 +786,11 @@ struct kvm_vcpu_arch {
> >         struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
> >         struct kvm_mmu_memory_cache mmu_page_header_cache;
> >
> > +       /*
> > +        * Protects change in size of mmu_shadow_page_cache cache.
> > +        */
> > +       spinlock_t mmu_shadow_page_cache_lock;
> > +
> >         /*
> >          * QEMU userspace and the guest each have their own FPU state.
> >          * In vcpu_run, we switch between the user and guest FPU contexts.
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 254bc46234e0..157417e1cb6e 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -164,7 +164,10 @@ struct kvm_shadow_walk_iterator {
> >
> >  static struct kmem_cache *pte_list_desc_cache;
> >  struct kmem_cache *mmu_page_header_cache;
> > -static struct percpu_counter kvm_total_used_mmu_pages;
> > +/*
> > + * Total number of unused pages in MMU shadow page cache.
> > + */
> > +static struct percpu_counter kvm_total_unused_mmu_pages;
> >
> >  static void mmu_spte_set(u64 *sptep, u64 spte);
> >
> > @@ -655,6 +658,22 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
> >         }
> >  }
> >
> > +static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > +                                    spinlock_t *cache_lock)
> > +{
> > +       int orig_nobjs;
> > +       int r;
> > +
> > +       spin_lock(cache_lock);
> > +       orig_nobjs = cache->nobjs;
> > +       r = kvm_mmu_topup_memory_cache(cache, PT64_ROOT_MAX_LEVEL);
> > +       if (orig_nobjs != cache->nobjs)
> > +               percpu_counter_add(&kvm_total_unused_mmu_pages,
> > +                                  (cache->nobjs - orig_nobjs));
> > +       spin_unlock(cache_lock);
> > +       return r;
> > +}
> > +
> >  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> >  {
> >         int r;
> > @@ -664,8 +683,8 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> >                                        1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
> >         if (r)
> >                 return r;
> > -       r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > -                                      PT64_ROOT_MAX_LEVEL);
> > +       r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > +                                     &vcpu->arch.mmu_shadow_page_cache_lock);
> >         if (r)
> >                 return r;
> >         if (maybe_indirect) {
> > @@ -678,10 +697,25 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> >                                           PT64_ROOT_MAX_LEVEL);
> >  }
> >
> > +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > +                                    spinlock_t *cache_lock)
> > +{
> > +       int orig_nobjs;
> > +
> > +       spin_lock(cache_lock);
> > +       orig_nobjs = cache->nobjs;
> > +       kvm_mmu_free_memory_cache(cache);
> > +       if (orig_nobjs)
> > +               percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> > +
> > +       spin_unlock(cache_lock);
> > +}
> > +
> >  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
> >  {
> >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> > -       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> > +       mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > +                                &vcpu->arch.mmu_shadow_page_cache_lock);
> >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
> >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
> >  }
> > @@ -1693,27 +1727,15 @@ static int is_empty_shadow_page(u64 *spt)
> >  }
> >  #endif
> >
> > -/*
> > - * This value is the sum of all of the kvm instances's
> > - * kvm->arch.n_used_mmu_pages values.  We need a global,
> > - * aggregate version in order to make the slab shrinker
> > - * faster
> > - */
> > -static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
> > -{
> > -       kvm->arch.n_used_mmu_pages += nr;
> > -       percpu_counter_add(&kvm_total_used_mmu_pages, nr);
> > -}
> > -
> >  static void kvm_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> >  {
> > -       kvm_mod_used_mmu_pages(kvm, +1);
> > +       kvm->arch.n_used_mmu_pages++;
> >         kvm_account_pgtable_pages((void *)sp->spt, +1);
> >  }
> >
> >  static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> >  {
> > -       kvm_mod_used_mmu_pages(kvm, -1);
> > +       kvm->arch.n_used_mmu_pages--;
> >         kvm_account_pgtable_pages((void *)sp->spt, -1);
> >  }
> >
> > @@ -2150,8 +2172,31 @@ struct shadow_page_caches {
> >         struct kvm_mmu_memory_cache *page_header_cache;
> >         struct kvm_mmu_memory_cache *shadow_page_cache;
> >         struct kvm_mmu_memory_cache *shadowed_info_cache;
> > +       /*
> > +        * Protects change in size of shadow_page_cache cache.
> > +        */
> > +       spinlock_t *shadow_page_cache_lock;
> >  };
> >
> > +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> > +                                   spinlock_t *cache_lock)
> > +{
> > +       int orig_nobjs;
> > +       void *page;
> > +
> > +       if (!cache_lock) {
> > +               spin_lock(cache_lock);
> > +               orig_nobjs = shadow_page_cache->nobjs;
> > +       }
>
> I believe this is guaranteed to cause a null pointer dereference.
>
> > +       page = kvm_mmu_memory_cache_alloc(shadow_page_cache);
> > +       if (!cache_lock) {
> > +               if (orig_nobjs)
> > +                       percpu_counter_dec(&kvm_total_unused_mmu_pages);
> > +               spin_unlock(cache_lock);
>
> Again, this will cause a null-pointer dereference. The check above
> just needs to be inverted.

Yes, I forgot to change it in this commit, and a later patch in the
series removes the whole "if (!cache_lock)" condition, so it escaped my
attention. Thanks for catching it.
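For the record, the intended fix is just inverting both checks so the lock is only touched when one was actually supplied. A minimal userspace sketch of that pattern, with mock types standing in for the kernel primitives (this is an illustration, not the kernel implementation):

```c
#include <stddef.h>

/* Mock stand-ins for the kernel types, for illustration only. */
struct mock_cache { int nobjs; };

static int lock_taken;                 /* records whether the "spinlock" was used */

static void mock_lock(int *lock)   { if (lock) lock_taken = 1; }
static void mock_unlock(int *lock) { (void)lock; }

/*
 * Sketch of the corrected kvm_mmu_sp_memory_cache_alloc(): the lock is
 * acquired and the counter adjusted only when cache_lock is non-NULL,
 * i.e. the conditions read "if (cache_lock)", not "if (!cache_lock)".
 */
static int sp_cache_alloc(struct mock_cache *cache, int *cache_lock)
{
	int orig_nobjs = 0;

	if (cache_lock) {
		mock_lock(cache_lock);
		orig_nobjs = cache->nobjs;
	}
	/* kvm_mmu_memory_cache_alloc() would pop an object here. */
	cache->nobjs--;
	if (cache_lock) {
		/* percpu_counter_dec() would go here when orig_nobjs != 0. */
		(void)orig_nobjs;
		mock_unlock(cache_lock);
	}
	return cache->nobjs;
}
```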

>
> > +       }
> > +       return page;
> > +}
> > +
> >  static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
> >                                                       struct shadow_page_caches *caches,
> >                                                       gfn_t gfn,
> > @@ -2161,7 +2206,8 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
> >         struct kvm_mmu_page *sp;
> >
> >         sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
> > -       sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
> > +       sp->spt = kvm_mmu_sp_memory_cache_alloc(caches->shadow_page_cache,
> > +                                               caches->shadow_page_cache_lock);
> >         if (!role.direct)
> >                 sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
> >
> > @@ -2218,6 +2264,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
> >                 .page_header_cache = &vcpu->arch.mmu_page_header_cache,
> >                 .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
> >                 .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
> > +               .shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
> >         };
> >
> >         return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
> > @@ -5916,6 +5963,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> >         vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> >
> >         vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > +       spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
> >
> >         vcpu->arch.mmu = &vcpu->arch.root_mmu;
> >         vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> > @@ -6051,11 +6099,6 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
> >                 kvm_tdp_mmu_zap_invalidated_roots(kvm);
> >  }
> >
> > -static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
> > -{
> > -       return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
> > -}
> > -
> >  static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
> >                         struct kvm_memory_slot *slot,
> >                         struct kvm_page_track_notifier_node *node)
> > @@ -6277,6 +6320,7 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
> >         /* Direct SPs do not require a shadowed_info_cache. */
> >         caches.page_header_cache = &kvm->arch.split_page_header_cache;
> >         caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> > +       caches.shadow_page_cache_lock = NULL;
> >
> >         /* Safe to pass NULL for vCPU since requesting a direct SP. */
> >         return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
> > @@ -6646,66 +6690,49 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
> >  static unsigned long
> >  mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  {
> > -       struct kvm *kvm;
> > -       int nr_to_scan = sc->nr_to_scan;
> > +       struct kvm_mmu_memory_cache *cache;
> > +       struct kvm *kvm, *first_kvm = NULL;
> >         unsigned long freed = 0;
> > +       /* spinlock for memory cache */
> > +       spinlock_t *cache_lock;
> > +       struct kvm_vcpu *vcpu;
> > +       unsigned long i;
> >
> >         mutex_lock(&kvm_lock);
> >
> >         list_for_each_entry(kvm, &vm_list, vm_list) {
> > -               int idx;
> > -               LIST_HEAD(invalid_list);
> > -
> > -               /*
> > -                * Never scan more than sc->nr_to_scan VM instances.
> > -                * Will not hit this condition practically since we do not try
> > -                * to shrink more than one VM and it is very unlikely to see
> > -                * !n_used_mmu_pages so many times.
> > -                */
> > -               if (!nr_to_scan--)
> > +               if (first_kvm == kvm)
> >                         break;
> > -               /*
> > -                * n_used_mmu_pages is accessed without holding kvm->mmu_lock
> > -                * here. We may skip a VM instance errorneosly, but we do not
> > -                * want to shrink a VM that only started to populate its MMU
> > -                * anyway.
> > -                */
> > -               if (!kvm->arch.n_used_mmu_pages &&
> > -                   !kvm_has_zapped_obsolete_pages(kvm))
> > -                       continue;
> > +               if (!first_kvm)
> > +                       first_kvm = kvm;
> > +               list_move_tail(&kvm->vm_list, &vm_list);
> >
> > -               idx = srcu_read_lock(&kvm->srcu);
>
> I think we still want to do the SRCU read lock here to prevent
> use-after-free on the vCPUs.

Since I hold mutex_lock(&kvm_lock), a kvm will not be removed from
vm_list; this blocks kvm_destroy_vm() from proceeding to destroy vCPUs
via kvm_arch_destroy_vm() -> kvm_destroy_vcpus(). Do we still need the
srcu_read_lock()? Also, kvm_for_each_vcpu() uses xa_for_each_range(),
which uses RCU for traversing the loop; won't these two be sufficient
to avoid needing srcu_read_lock() here?

>
> > -               write_lock(&kvm->mmu_lock);
> > +               kvm_for_each_vcpu(i, vcpu, kvm) {
> > +                       cache = &vcpu->arch.mmu_shadow_page_cache;
> > +                       cache_lock = vcpu->arch.mmu_shadow_page_cache_lock;
> > +                       if (READ_ONCE(cache->nobjs)) {
> > +                               spin_lock(cache_lock);
> > +                               freed += kvm_mmu_empty_memory_cache(cache);
>
> Would it make sense to just have kvm_mmu_empty_memory_cache()
> decrement the per-cpu counter itself? I don't think there's much perf
> to be gained by reducing percpu counter updates here and it would
> consolidate the bookkeeping.

kvm_mmu_empty_memory_cache() is also used by other caches for which we
are not keeping the count.
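In other words, the counter update stays with the shadow-page-cache callers while the emptying helper stays generic. A simplified model of that split (mock types, illustration only):

```c
/*
 * Simplified model: emptying a cache returns how many objects were
 * freed, and only the shadow-page-cache callers fold that into the
 * global counter (kvm_total_unused_mmu_pages in the real code).
 */
struct mock_cache { int nobjs; };

static long total_unused;   /* stand-in for the percpu counter */

static int empty_cache(struct mock_cache *mc)
{
	int freed = mc->nobjs;

	mc->nobjs = 0;          /* objects would be freed here */
	return freed;
}

/* Shadow page caches account the freed objects globally... */
static void free_sp_cache(struct mock_cache *mc)
{
	total_unused -= empty_cache(mc);
}

/* ...while other caches (pte_list_desc, page header) do not. */
static void free_plain_cache(struct mock_cache *mc)
{
	empty_cache(mc);
}
```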

>
> > +                               spin_unlock(cache_lock);
> > +                       }
> >
> > -               if (kvm_has_zapped_obsolete_pages(kvm)) {
> > -                       kvm_mmu_commit_zap_page(kvm,
> > -                             &kvm->arch.zapped_obsolete_pages);
> > -                       goto unlock;
> >                 }
> >
> > -               freed = kvm_mmu_zap_oldest_mmu_pages(kvm, sc->nr_to_scan);
> > -
> > -unlock:
> > -               write_unlock(&kvm->mmu_lock);
> > -               srcu_read_unlock(&kvm->srcu, idx);
> > -
> > -               /*
> > -                * unfair on small ones
> > -                * per-vm shrinkers cry out
> > -                * sadness comes quickly
> > -                */
>
> Nooooo, don't delete the beautiful poem!

I will fix this mistake in the next version, pardon my ignorance :)

>
> > -               list_move_tail(&kvm->vm_list, &vm_list);
> > -               break;
> > +               if (freed >= sc->nr_to_scan)
> > +                       break;
> >         }
> >
> > +       if (freed)
> > +               percpu_counter_sub(&kvm_total_unused_mmu_pages, freed);
> >         mutex_unlock(&kvm_lock);
> > +       percpu_counter_sync(&kvm_total_unused_mmu_pages);
> >         return freed;
> >  }
> >
> >  static unsigned long
> >  mmu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
> >  {
> > -       return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> > +       return percpu_counter_sum_positive(&kvm_total_unused_mmu_pages);
>
> This will return 0 if the sum of all the per-cpu counters is negative.
> It should never be negative though. Might be nice to add a warning if
> we would get a negative sum.
>

Sounds good.


> >  }
> >
> >  static struct shrinker mmu_shrinker = {
> > @@ -6820,7 +6847,7 @@ int kvm_mmu_vendor_module_init(void)
> >         if (!mmu_page_header_cache)
> >                 goto out;
> >
> > -       if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
> > +       if (percpu_counter_init(&kvm_total_unused_mmu_pages, 0, GFP_KERNEL))
> >                 goto out;
> >
> >         ret = register_shrinker(&mmu_shrinker, "x86-mmu");
> > @@ -6830,7 +6857,7 @@ int kvm_mmu_vendor_module_init(void)
> >         return 0;
> >
> >  out_shrinker:
> > -       percpu_counter_destroy(&kvm_total_used_mmu_pages);
> > +       percpu_counter_destroy(&kvm_total_unused_mmu_pages);
> >  out:
> >         mmu_destroy_caches();
> >         return ret;
> > @@ -6847,7 +6874,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
> >  void kvm_mmu_vendor_module_exit(void)
> >  {
> >         mmu_destroy_caches();
> > -       percpu_counter_destroy(&kvm_total_used_mmu_pages);
> > +       percpu_counter_destroy(&kvm_total_unused_mmu_pages);
> >         unregister_shrinker(&mmu_shrinker);
> >  }
> >
> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > index ac00bfbf32f6..c2a342028b6a 100644
> > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > @@ -325,4 +325,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> >  void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> >  void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> >
> > +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> > +                                   spinlock_t *cache_lock);
> >  #endif /* __KVM_X86_MMU_INTERNAL_H */
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 764f7c87286f..4974fa96deff 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -264,7 +264,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> >         struct kvm_mmu_page *sp;
> >
> >         sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> > -       sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> > +       sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
> > +                                               &vcpu->arch.mmu_shadow_page_cache_lock);
> >
> >         return sp;
> >  }
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 01aad8b74162..efd9b38ea9a2 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -1362,6 +1362,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
> >  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
> >  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);
> >  int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
> > +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc);
> >  void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
> >  void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> >  #endif
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 13e88297f999..f2d762878b97 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -438,8 +438,10 @@ int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
> >         return mc->nobjs;
> >  }
> >
> > -void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> > +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc)
> >  {
> > +       int freed = mc->nobjs;
> > +
> >         while (mc->nobjs) {
> >                 if (mc->kmem_cache)
> >                         kmem_cache_free(mc->kmem_cache, mc->objects[--mc->nobjs]);
> > @@ -447,8 +449,13 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> >                         free_page((unsigned long)mc->objects[--mc->nobjs]);
> >         }
> >
> > -       kvfree(mc->objects);
> > +       return freed;
> > +}
> >
> > +void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> > +{
> > +       kvm_mmu_empty_memory_cache(mc);
> > +       kvfree(mc->objects);
> >         mc->objects = NULL;
> >         mc->capacity = 0;
> >  }
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >


* Re: [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split
  2022-12-27 19:02   ` Ben Gardon
@ 2022-12-28 22:07     ` Vipin Sharma
  0 siblings, 0 replies; 47+ messages in thread
From: Vipin Sharma @ 2022-12-28 22:07 UTC (permalink / raw)
  To: Ben Gardon; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Tue, Dec 27, 2022 at 11:02 AM Ben Gardon <bgardon@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > When dirty log is enabled, huge pages are split. Page table's pages
>
> Nit: Suggest "When huge pages are split for dirty log" since this can
> happen at various points during dirty logging.
> Same below.
>

Yeah, this should be updated.

> > during the split are allocated based on the current thread NUMA node or
> > mempolicy. This causes inefficient page table accesses if underlying
> > page is on a different NUMA node
> >
> > Allocate page table's pages on the same NUMA node as the underlying huge
> > page when dirty log is enabled and huge pages are split.
> >
> > The performance gain during the pre-copy phase of live migrations of a
> > 416 vCPUs and 11 TiB memory VM  on a 8 node host was seen in the range
> > of 130% to 150%.
> >
> > Suggested-by: David Matlack <dmatlack@google.com>
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/x86/kvm/mmu/tdp_mmu.c | 12 ++++++++----
> >  include/linux/kvm_host.h   | 18 ++++++++++++++++++
> >  2 files changed, 26 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 4974fa96deff..376b8dceb3f9 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -1403,7 +1403,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
> >         return spte_set;
> >  }
> >
> > -static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
> > +static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(int nid, gfp_t gfp)
> >  {
> >         struct kvm_mmu_page *sp;
> >
> > @@ -1413,7 +1413,8 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
> >         if (!sp)
> >                 return NULL;
> >
> > -       sp->spt = (void *)__get_free_page(gfp);
> > +       sp->spt = kvm_mmu_get_free_page(nid, gfp);
> > +
>
> Just so that kvm_mmu_get_free_page isn't dead code in the previous
> commit, I'd do this refactor there and just pass NUMA_NO_NODE here.
>

Agreed.

> >         if (!sp->spt) {
> >                 kmem_cache_free(mmu_page_header_cache, sp);
> >                 return NULL;
> > @@ -1427,6 +1428,9 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
> >                                                        bool shared)
> >  {
> >         struct kvm_mmu_page *sp;
> > +       int nid;
> > +
> > +       nid = kvm_pfn_to_page_table_nid(spte_to_pfn(iter->old_spte));
> >
> >         /*
> >          * Since we are allocating while under the MMU lock we have to be
> > @@ -1437,7 +1441,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
> >          * If this allocation fails we drop the lock and retry with reclaim
> >          * allowed.
> >          */
> > -       sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
> > +       sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_NOWAIT | __GFP_ACCOUNT);
> >         if (sp)
> >                 return sp;
> >
> > @@ -1449,7 +1453,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
> >                 write_unlock(&kvm->mmu_lock);
> >
> >         iter->yielded = true;
> > -       sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
> > +       sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_KERNEL_ACCOUNT);
> >
> >         if (shared)
> >                 read_lock(&kvm->mmu_lock);
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index d48064503b88..a262e15ebd19 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -1583,6 +1583,24 @@ void kvm_arch_sync_events(struct kvm *kvm);
> >  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
> >
> >  struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn);
> > +
> > +/*
> > + * Tells the appropriate NUMA node location of the page table's page based on
> > + * pfn it will point to.
> > + *
> > + * Return the nid of the page if pfn is valid and backed by a refcounted page,
> > + * otherwise, return the nearest memory node for the current CPU.
>
> Nit: Should this be "current thread"?

I will say "current thread's CPU", since memory nodes are near CPUs,
whereas a thread can execute on multiple CPUs throughout its lifetime.

>
> > + */
> > +static inline int kvm_pfn_to_page_table_nid(kvm_pfn_t pfn)
>
> This could just be kvm_pfn_nid (or even better kvm_pfn_node_id) since
> this really has nothing to do with page tables. We just want to know
> which NUMA node backs the given PFN.

Apart from the NUMA node backing the given PFN, it can also return the
nearest NUMA node via numa_mem_id(). So it is actually telling which
NUMA node is the best one for the page table's page, given a PFN.
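The decision the helper makes can be sketched in userspace with the page lookup and the local-node lookup stubbed out (hypothetical mocks; the real code uses kvm_pfn_to_refcounted_page() and numa_mem_id()):

```c
#include <stddef.h>

/* Hypothetical stand-ins for the kernel primitives. */
struct mock_page { int nid; };

#define MOCK_LOCAL_NODE 0   /* what numa_mem_id() would return */

/* NULL models a pfn that is not backed by a refcounted page. */
static struct mock_page *mock_pfn_to_refcounted_page(struct mock_page *backing)
{
	return backing;
}

/*
 * Sketch of kvm_pfn_to_page_table_nid(): prefer the node backing the
 * pfn, fall back to the node nearest the current CPU otherwise.
 */
static int page_table_nid(struct mock_page *backing)
{
	struct mock_page *page = mock_pfn_to_refcounted_page(backing);

	return page ? page->nid : MOCK_LOCAL_NODE;
}
```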


>
> > +{
> > +       struct page *page = kvm_pfn_to_refcounted_page(pfn);
> > +
> > +       if (page)
> > +               return page_to_nid(page);
> > +       else
> > +               return numa_mem_id();
> > +}
> > +
> >  bool kvm_is_zone_device_page(struct page *page);
> >
> >  struct kvm_irq_ack_notifier {
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >


* Re: [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  2022-12-27 19:09   ` Ben Gardon
@ 2022-12-28 22:07     ` Vipin Sharma
  2022-12-29 18:22       ` Ben Gardon
  0 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2022-12-28 22:07 UTC (permalink / raw)
  To: Ben Gardon; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Tue, Dec 27, 2022 at 11:10 AM Ben Gardon <bgardon@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > Add 'node' variable in kvm_mmu_memory_cache{} to denote which NUMA node
> > this cache should allocate memory from. Default initialize to
> > NUMA_NO_NODE in all architectures.
> >
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/arm64/kvm/arm.c      |  2 +-
> >  arch/arm64/kvm/mmu.c      |  4 +++-
> >  arch/mips/kvm/mips.c      |  2 ++
> >  arch/riscv/kvm/mmu.c      |  2 +-
> >  arch/riscv/kvm/vcpu.c     |  2 +-
> >  arch/x86/kvm/mmu/mmu.c    | 22 ++++++++++++----------
> >  include/linux/kvm_host.h  |  6 ++++++
> >  include/linux/kvm_types.h |  2 ++
> >  8 files changed, 28 insertions(+), 14 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 9c5573bc4614..52a41f4532e2 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >         vcpu->arch.target = -1;
> >         bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> >
> > -       vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> >
> >         /*
> >          * Default value for the FP state, will be overloaded at load
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 31d7fa4c7c14..bd07155e17fa 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> >  {
> >         phys_addr_t addr;
> >         int ret = 0;
> > -       struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> > +       struct kvm_mmu_memory_cache cache;
> >         struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
> >         enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
> >                                      KVM_PGTABLE_PROT_R |
> >                                      (writable ? KVM_PGTABLE_PROT_W : 0);
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&cache, NULL, NUMA_NO_NODE);
> > +
> >         if (is_protected_kvm_enabled())
> >                 return -EPERM;
> >
> > diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> > index a25e0b73ee70..b017c29a9340 100644
> > --- a/arch/mips/kvm/mips.c
> > +++ b/arch/mips/kvm/mips.c
> > @@ -304,6 +304,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >                      HRTIMER_MODE_REL);
> >         vcpu->arch.comparecount_timer.function = kvm_mips_comparecount_wakeup;
> >
> > +       vcpu->arch.mmu_page_cache.node = NUMA_NO_NODE;
> > +
>
> It looks weird to have MIPS not using the initialization MACRO. Should
> it just have a GFP_ZERO parameter?

MIPS was not setting GFP_ZERO explicitly before my series, so I didn't
make it GFP_ZERO. I am not sure whether MIPS needs it; I tried to keep
the existing behavior in my patch.

Maybe someone from MIPS can tell us more about it.

>
> >         /*
> >          * Allocate space for host mode exception handlers that handle
> >          * guest mode exits
> > diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> > index 34b57e0be2ef..119de4520cc6 100644
> > --- a/arch/riscv/kvm/mmu.c
> > +++ b/arch/riscv/kvm/mmu.c
> > @@ -353,9 +353,9 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> >         phys_addr_t addr, end;
> >         struct kvm_mmu_memory_cache pcache = {
> >                 .gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
> > -               .gfp_zero = __GFP_ZERO,
> >         };
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&pcache, NULL, NUMA_NO_NODE);
> >         end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
> >         pfn = __phys_to_pfn(hpa);
> >
> > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > index 7c08567097f0..189b14feb365 100644
> > --- a/arch/riscv/kvm/vcpu.c
> > +++ b/arch/riscv/kvm/vcpu.c
> > @@ -161,7 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >
> >         /* Mark this VCPU never ran */
> >         vcpu->arch.ran_atleast_once = false;
> > -       vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> >         bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
> >
> >         /* Setup ISA features available to VCPU */
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 6f6a10d7a871..23a3b82b2384 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -5954,13 +5954,14 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> >  {
> >         int ret;
> >
> > -       vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
> > -       vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
> > +                                 pte_list_desc_cache, NUMA_NO_NODE);
> >
> > -       vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> > -       vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
> > +                                 mmu_page_header_cache, NUMA_NO_NODE);
> >
> > -       vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
> > +                                 NULL, NUMA_NO_NODE);
> >         spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
> >
> >         vcpu->arch.mmu = &vcpu->arch.root_mmu;
> > @@ -6124,14 +6125,15 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> >         node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
> >         kvm_page_track_register_notifier(kvm, node);
> >
> > -       kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> > -       kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
> > +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
> > +                                 mmu_page_header_cache, NUMA_NO_NODE);
> >
> > -       kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> > +                                 NULL, NUMA_NO_NODE);
> >         spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
> >
> > -       kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
> > -       kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
> > +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
> > +                                 pte_list_desc_cache, NUMA_NO_NODE);
> >
> >         return 0;
> >  }
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index a262e15ebd19..719687a37ef7 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -2302,4 +2302,10 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
> >  /* Max number of entries allowed for each kvm dirty ring */
> >  #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
> >
> > +#define INIT_KVM_MMU_MEMORY_CACHE(_cache, _kmem_cache, _node) ({       \
> > +       (_cache)->kmem_cache = _kmem_cache;                             \
> > +       (_cache)->gfp_zero = __GFP_ZERO;                                \
> > +       (_cache)->node = _node;                                         \
> > +})
> > +
>
> Given that this initialization is probably not happening in a super
> hot path, is there any downside to just using a function for the
> initialization?
>

It can totally be a function as well. I will make it a function in the
next version.
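For reference, a function-based version might look like the sketch below. This is a stand-alone mock, not the kernel code: the struct fields mirror the quoted patch, but `__GFP_ZERO` is replaced with a placeholder flag value so the snippet compiles outside the kernel tree.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-alone stand-ins for kernel definitions; field names follow the patch. */
#define MOCK_GFP_ZERO 0x100u
#define NUMA_NO_NODE  (-1)

struct kmem_cache;		/* opaque, as in the kernel */

struct kvm_mmu_memory_cache {
	unsigned int gfp_zero;
	struct kmem_cache *kmem_cache;
	int node;
};

/* Function replacement for the INIT_KVM_MMU_MEMORY_CACHE() macro. */
static void kvm_init_mmu_memory_cache(struct kvm_mmu_memory_cache *cache,
				      struct kmem_cache *kmem_cache, int node)
{
	cache->kmem_cache = kmem_cache;
	cache->gfp_zero = MOCK_GFP_ZERO;
	cache->node = node;
}
```

Since initialization only runs at vCPU/VM creation, there is no hot-path cost to losing the macro, and a function gives type checking on the arguments.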


> >  #endif
> > diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> > index 76de36e56cdf..9c70ce95e51f 100644
> > --- a/include/linux/kvm_types.h
> > +++ b/include/linux/kvm_types.h
> > @@ -97,6 +97,8 @@ struct kvm_mmu_memory_cache {
> >         struct kmem_cache *kmem_cache;
> >         int capacity;
> >         void **objects;
> > +       /* Node on which memory should be allocated by default */
> > +       int node;
> >  };
> >  #endif
> >
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 7/9] KVM: x86/mmu: Allocate page table's pages on NUMA node of the underlying pages
  2022-12-27 19:34   ` Ben Gardon
@ 2022-12-28 22:08     ` Vipin Sharma
  2022-12-29 18:20       ` Ben Gardon
  0 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2022-12-28 22:08 UTC (permalink / raw)
  To: Ben Gardon; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Tue, Dec 27, 2022 at 11:34 AM Ben Gardon <bgardon@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > Page table pages of a VM are currently allocated based on the current
> > task's NUMA node or its mempolicy. This can cause suboptimal remote
> > accesses by the vCPU if it is accessing physical pages local to its NUMA
> > node but the page table pages mapping those physical pages were created
> > by some other vCPU which was on a different NUMA node or had a different
> > policy.
> >
> > Allocate page table pages on the same NUMA node where the underlying
> > physical page exists. Page tables at levels 5, 4, and 3 might not end up
> > on the same NUMA node as they can span multiple NUMA nodes.
>
> A page table at any level could map memory spanning multiple NUMA
> nodes, it just becomes more likely at higher levels.
> We're only guaranteed that a page table maps memory all on the same
> node if it's a split hugepage.

Even in this case, it is a best effort.

> This change can only guarantee that the page table pages are allocated
> on the same node as at least some of the memory they map.
> Of course in practice, the above is absolutely correct since we'd
> expect to have multi-GB continuous ranges of GFNs allocated on the
> same node via huge pages.
>
> And since the root pages are allocated based only on where the thread
> allocating them is running, they're not actually guaranteed to be on
> the same node as any of the memory they map. (Though they probably
> will be.)
>

I will add more details in the commit in the next version.
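For readers following the thread, the intent of kvm_pfn_to_page_table_nid() (introduced earlier in this series) can be sketched stand-alone as below. The mock page table replaces the real pfn_to_page()/page_to_nid() lookup, and the fall-back-to-current-node behavior for pfns without a struct page is an assumption based on the discussion, not a quote of the actual patch.

```c
#include <assert.h>

#define NUMA_NO_NODE (-1)

/* Mock stand-ins for kernel structures and helpers. */
struct mock_page { int nid; };

static struct mock_page mock_pages[] = {
	{ .nid = 0 }, { .nid = 0 }, { .nid = 1 }, { .nid = 1 },
};

static int mock_numa_mem_id(void)
{
	return 0;		/* pretend the calling thread runs on node 0 */
}

/*
 * Sketch of the series' kvm_pfn_to_page_table_nid(): prefer the NUMA
 * node of the physical page backing @pfn, and fall back to the node of
 * the current thread when the pfn has no backing struct page.
 */
static int pfn_to_page_table_nid(unsigned long pfn)
{
	if (pfn < sizeof(mock_pages) / sizeof(mock_pages[0]))
		return mock_pages[pfn].nid;
	return mock_numa_mem_id();
}
```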

> >
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  2 +-
> >  arch/x86/kvm/mmu/mmu.c          | 63 ++++++++++++++++++++++-----------
> >  arch/x86/kvm/mmu/paging_tmpl.h  |  4 +--
> >  arch/x86/kvm/mmu/tdp_mmu.c      | 11 +++---
> >  virt/kvm/kvm_main.c             |  2 +-
> >  5 files changed, 53 insertions(+), 29 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 293994fabae3..b1f319ad6f89 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -782,7 +782,7 @@ struct kvm_vcpu_arch {
> >         struct kvm_mmu *walk_mmu;
> >
> >         struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
> > -       struct kvm_mmu_memory_cache mmu_shadow_page_cache;
> > +       struct kvm_mmu_memory_cache mmu_shadow_page_cache[MAX_NUMNODES];
> >         struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
> >         struct kvm_mmu_memory_cache mmu_page_header_cache;
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 23a3b82b2384..511c6ef265ee 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -677,24 +677,29 @@ static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> >
> >  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> >  {
> > -       int r;
> > +       int r, nid;
> >
> >         /* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
> >         r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
> >                                        1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
> >         if (r)
> >                 return r;
> > -       r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > -                                     &vcpu->arch.mmu_shadow_page_cache_lock,
> > -                                     PT64_ROOT_MAX_LEVEL);
> > -       if (r)
> > -               return r;
> > +
> > +       for_each_online_node(nid) {
> > +               r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache[nid],
> > +                                             &vcpu->arch.mmu_shadow_page_cache_lock,
> > +                                             PT64_ROOT_MAX_LEVEL);
> > +               if (r)
> > +                       return r;
> > +       }
> > +
> >         if (maybe_indirect) {
> >                 r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadowed_info_cache,
> >                                                PT64_ROOT_MAX_LEVEL);
> >                 if (r)
> >                         return r;
> >         }
> > +
> >         return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
> >                                           PT64_ROOT_MAX_LEVEL);
> >  }
> > @@ -715,9 +720,14 @@ static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> >
> >  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
> >  {
> > +       int nid;
> > +
> >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> > -       mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > -                                &vcpu->arch.mmu_shadow_page_cache_lock);
> > +
> > +       for_each_node(nid)
> > +               mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache[nid],
> > +                                        &vcpu->arch.mmu_shadow_page_cache_lock);
> > +
>
> Was just trying to think if there could be any issue with memory
> leakage if the online nodes changed, though IDK if any hardware does
> that.
> Still, it might be more robust to use ARRAY_SIZE and cover the whole array.

for_each_node() goes through all of the possible nodes on the system,
whereas for_each_online_node() only goes through the online nodes.
The current code seems right to me; let me know if I am overlooking
something.
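The distinction matters for the leak concern raised above. A stand-alone sketch (a small array stands in for the kernel's node_states[] masks) shows why the free path must walk every possible node rather than only the online ones:

```c
#include <assert.h>

#define MAX_MOCK_NODES 4

/* Mock node state: IDs are sparse -- nodes 0 and 2 online, 1 and 3 offline. */
static const int node_online_mask[MAX_MOCK_NODES] = { 1, 0, 1, 0 };

/* Walk only online nodes: objects cached for offline node 1 are missed. */
static int free_online_only(int nobjs[MAX_MOCK_NODES])
{
	int nid, freed = 0;

	for (nid = 0; nid < MAX_MOCK_NODES; nid++) {
		if (node_online_mask[nid]) {
			freed += nobjs[nid];
			nobjs[nid] = 0;
		}
	}
	return freed;
}

/* Walk every possible node, as for_each_node() does: nothing is missed. */
static int free_all_possible(int nobjs[MAX_MOCK_NODES])
{
	int nid, freed = 0;

	for (nid = 0; nid < MAX_MOCK_NODES; nid++) {
		freed += nobjs[nid];
		nobjs[nid] = 0;
	}
	return freed;
}
```

If a node goes offline after its cache was topped up, only the all-possible-nodes walk reclaims those objects.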

>
> >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
> >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
> >  }
> > @@ -2256,11 +2266,12 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
> >
> >  static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
> >                                                     gfn_t gfn,
> > -                                                   union kvm_mmu_page_role role)
> > +                                                   union kvm_mmu_page_role role,
> > +                                                   int nid)
> >  {
> >         struct shadow_page_caches caches = {
> >                 .page_header_cache = &vcpu->arch.mmu_page_header_cache,
> > -               .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
> > +               .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache[nid],
> >                 .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
> >                 .shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
> >         };
> > @@ -2316,15 +2327,19 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
> >
> >  static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
> >                                                  u64 *sptep, gfn_t gfn,
> > -                                                bool direct, unsigned int access)
> > +                                                bool direct, unsigned int access,
> > +                                                kvm_pfn_t pfn)
> >  {
> >         union kvm_mmu_page_role role;
> > +       int nid;
> >
> >         if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
> >                 return ERR_PTR(-EEXIST);
> >
> >         role = kvm_mmu_child_role(sptep, direct, access);
> > -       return kvm_mmu_get_shadow_page(vcpu, gfn, role);
> > +       nid = kvm_pfn_to_page_table_nid(pfn);
> > +
> > +       return kvm_mmu_get_shadow_page(vcpu, gfn, role, nid);
> >  }
> >
> >  static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
> > @@ -3208,7 +3223,8 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >                 if (it.level == fault->goal_level)
> >                         break;
> >
> > -               sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);
> > +               sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true,
> > +                                         ACC_ALL, fault->pfn);
> >                 if (sp == ERR_PTR(-EEXIST))
> >                         continue;
> >
> > @@ -3636,7 +3652,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
> >         WARN_ON_ONCE(quadrant && !role.has_4_byte_gpte);
> >         WARN_ON_ONCE(role.direct && role.has_4_byte_gpte);
> >
> > -       sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
> > +       sp = kvm_mmu_get_shadow_page(vcpu, gfn, role, numa_mem_id());
> >         ++sp->root_count;
> >
> >         return __pa(sp->spt);
> > @@ -5952,7 +5968,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
> >
> >  int kvm_mmu_create(struct kvm_vcpu *vcpu)
> >  {
> > -       int ret;
> > +       int ret, nid;
> >
> >         INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
> >                                   pte_list_desc_cache, NUMA_NO_NODE);
> > @@ -5960,8 +5976,9 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> >         INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
> >                                   mmu_page_header_cache, NUMA_NO_NODE);
> >
> > -       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
> > -                                 NULL, NUMA_NO_NODE);
> > +       for_each_node(nid)
> > +               INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache[nid],
> > +                                         NULL, nid);
> >         spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
> >
> >         vcpu->arch.mmu = &vcpu->arch.root_mmu;
> > @@ -6692,13 +6709,17 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
> >  }
> >
> >  static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache,
> > +                                     int cache_count,
> >                                       spinlock_t *cache_lock)
> >  {
> >         unsigned long freed = 0;
> > +       int nid;
> >
> >         spin_lock(cache_lock);
> > -       if (cache->nobjs)
> > -               freed = kvm_mmu_empty_memory_cache(cache);
> > +       for (nid = 0; nid < cache_count; nid++) {
> > +               if (node_online(nid) && cache[nid].nobjs)
>
> Is there any reason to keep the cache if !node_online(nid)?
> Actually, I'd also just drop the cache_count argument and always
> iterate over the entire array, only checking nobjs. There's no
> guarantee I'm aware of that the set of nodes has a sequential series
> of IDs starting at 0 and you'd get a bug if that wasn't the case since
> it only iterates to  nid < cache_count here but some of the earlier
> nids might not have been online.
>

This is just temporary and will be removed in the next patch in the series.

mmu_shrink_cache() is used for both split_shadow_page_cache (single
object) and mmu_shadow_page_cache[MAX_NUMNODES].

In the next patch of this series I used for_each_online_node(nid); I
will change it to for_each_node() in the next version.

> > +                       freed += kvm_mmu_empty_memory_cache(&cache[nid]);
> > +       }
> >         spin_unlock(cache_lock);
> >         return freed;
> >  }
> > @@ -6721,13 +6742,15 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >                 list_move_tail(&kvm->vm_list, &vm_list);
> >
> >                 freed += mmu_shrink_cache(&kvm->arch.split_shadow_page_cache,
> > +                                         1,
>
> So lonely.
> One.
> All by itself,
> with only a coma for company.
>
> NIT: This could be merged to the previous or subsequent lines.

This is a strong and independent '1'.

>
> >                                           &kvm->arch.split_shadow_page_cache_lock);
> >
> >                 if (freed >= sc->nr_to_scan)
> >                         break;
> >
> >                 kvm_for_each_vcpu(i, vcpu, kvm) {
> > -                       freed += mmu_shrink_cache(&vcpu->arch.mmu_shadow_page_cache,
> > +                       freed += mmu_shrink_cache(vcpu->arch.mmu_shadow_page_cache,
> > +                                                 MAX_NUMNODES,
> >                                                   &vcpu->arch.mmu_shadow_page_cache_lock);
> >                 }
> >
> > diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> > index e5662dbd519c..1ceca62ec4cf 100644
> > --- a/arch/x86/kvm/mmu/paging_tmpl.h
> > +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> > @@ -652,7 +652,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> >                 table_gfn = gw->table_gfn[it.level - 2];
> >                 access = gw->pt_access[it.level - 2];
> >                 sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
> > -                                         false, access);
> > +                                         false, access, fault->pfn);
> >
> >                 if (sp != ERR_PTR(-EEXIST)) {
> >                         /*
> > @@ -708,7 +708,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> >                 validate_direct_spte(vcpu, it.sptep, direct_access);
> >
> >                 sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn,
> > -                                         true, direct_access);
> > +                                         true, direct_access, fault->pfn);
> >                 if (sp == ERR_PTR(-EEXIST))
> >                         continue;
> >
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 376b8dceb3f9..b5abae2366dd 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -259,12 +259,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
> >                     kvm_mmu_page_as_id(_root) != _as_id) {              \
> >                 } else
> >
> > -static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> > +static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu, int nid)
> >  {
> >         struct kvm_mmu_page *sp;
> >
> >         sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> > -       sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
> > +       sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache[nid],
> >                                                 &vcpu->arch.mmu_shadow_page_cache_lock);
> >
> >         return sp;
> > @@ -317,7 +317,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
> >                         goto out;
> >         }
> >
> > -       root = tdp_mmu_alloc_sp(vcpu);
> > +       root = tdp_mmu_alloc_sp(vcpu, numa_mem_id());
>
> Might be worth calling out somewhere that the root page is just
> allocated based on where the thread allocating it runs.
>

How about a comment just up here, or do you prefer it at tdp_mmu_roots
in struct kvm_arch{}?

> >         tdp_mmu_init_sp(root, NULL, 0, role);
> >
> >         refcount_set(&root->tdp_mmu_root_count, 1);
> > @@ -1149,7 +1149,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >         struct kvm *kvm = vcpu->kvm;
> >         struct tdp_iter iter;
> >         struct kvm_mmu_page *sp;
> > -       int ret = RET_PF_RETRY;
> > +       int ret = RET_PF_RETRY, nid;
> >
> >         kvm_mmu_hugepage_adjust(vcpu, fault);
> >
> > @@ -1178,11 +1178,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >                     !is_large_pte(iter.old_spte))
> >                         continue;
> >
> > +               nid = kvm_pfn_to_page_table_nid(fault->pfn);
> >                 /*
> >                  * The SPTE is either non-present or points to a huge page that
> >                  * needs to be split.
> >                  */
> > -               sp = tdp_mmu_alloc_sp(vcpu);
> > +               sp = tdp_mmu_alloc_sp(vcpu, nid);
> >                 tdp_mmu_init_child_sp(sp, &iter);
> >
> >                 sp->nx_huge_page_disallowed = fault->huge_page_disallowed;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index d96c8146e9ba..4f3db7ffeba8 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -415,7 +415,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
> >         if (mc->kmem_cache)
> >                 return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
> >         else
> > -               return (void *)__get_free_page(gfp_flags);
> > +               return kvm_mmu_get_free_page(mc->node, gfp_flags);
>
> You could do part of this change in the commit that introduced
> kvm_mmu_get_free_page too.

Yeah, I can do it there as well. No strong opinions. I will update it
in the next version.

> >  }
> >
> >  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware
  2022-12-27 19:42   ` Ben Gardon
@ 2022-12-28 22:08     ` Vipin Sharma
  0 siblings, 0 replies; 47+ messages in thread
From: Vipin Sharma @ 2022-12-28 22:08 UTC (permalink / raw)
  To: Ben Gardon; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Tue, Dec 27, 2022 at 11:43 AM Ben Gardon <bgardon@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > Make split_shadow_page_cache NUMA aware and allocate page table's pages
> > during the split based on the underlying physical page's NUMA node.
> >
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  2 +-
> >  arch/x86/kvm/mmu/mmu.c          | 50 ++++++++++++++++++---------------
> >  2 files changed, 29 insertions(+), 23 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index b1f319ad6f89..7b3f36ae37a4 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1410,7 +1410,7 @@ struct kvm_arch {
> >          *
> >          * Protected by kvm->slots_lock.
> >          */
> > -       struct kvm_mmu_memory_cache split_shadow_page_cache;
> > +       struct kvm_mmu_memory_cache split_shadow_page_cache[MAX_NUMNODES];
> >         struct kvm_mmu_memory_cache split_page_header_cache;
> >
> >         /*
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 511c6ef265ee..7454bfc49a51 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -6126,7 +6126,7 @@ static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
> >  int kvm_mmu_init_vm(struct kvm *kvm)
> >  {
> >         struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
> > -       int r;
> > +       int r, nid;
> >
> >         INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
> >         INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
> > @@ -6145,8 +6145,9 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> >         INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
> >                                   mmu_page_header_cache, NUMA_NO_NODE);
> >
> > -       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> > -                                 NULL, NUMA_NO_NODE);
> > +       for_each_node(nid)
>
> Again, assuming no one sets CONFIG_NODE_SHIFT to a ridiculous value,
> it would probably be fine to initialize the entire array here since
> that doesn't take any extra memory and we're not in a super hot path.

This goes through the entire array. I think you are confusing it with
for_each_online_node().

>
> > +               INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache[nid],
> > +                                         NULL, NUMA_NO_NODE);
> >         spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
> >
> >         INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
> > @@ -6157,10 +6158,13 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> >
> >  static void mmu_free_vm_memory_caches(struct kvm *kvm)
> >  {
> > +       int nid;
> > +
> >         kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
> >         kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
> > -       mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
> > -                                &kvm->arch.split_shadow_page_cache_lock);
> > +       for_each_node(nid)
>
> Again, could just iterate over the whole array here.
>
> > +               mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache[nid],
> > +                                        &kvm->arch.split_shadow_page_cache_lock);
> >  }
> >
> >  void kvm_mmu_uninit_vm(struct kvm *kvm)
> > @@ -6269,7 +6273,7 @@ static inline bool need_topup(struct kvm_mmu_memory_cache *cache, int min)
> >         return kvm_mmu_memory_cache_nr_free_objects(cache) < min;
> >  }
> >
> > -static bool need_topup_split_caches_or_resched(struct kvm *kvm)
> > +static bool need_topup_split_caches_or_resched(struct kvm *kvm, int nid)
> >  {
> >         if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
> >                 return true;
> > @@ -6281,10 +6285,10 @@ static bool need_topup_split_caches_or_resched(struct kvm *kvm)
> >          */
> >         return need_topup(&kvm->arch.split_desc_cache, SPLIT_DESC_CACHE_MIN_NR_OBJECTS) ||
> >                need_topup(&kvm->arch.split_page_header_cache, 1) ||
> > -              need_topup(&kvm->arch.split_shadow_page_cache, 1);
> > +              need_topup(&kvm->arch.split_shadow_page_cache[nid], 1);
> >  }
> >
> > -static int topup_split_caches(struct kvm *kvm)
> > +static int topup_split_caches(struct kvm *kvm, int nid)
> >  {
> >         /*
> >          * Allocating rmap list entries when splitting huge pages for nested
> > @@ -6314,18 +6318,21 @@ static int topup_split_caches(struct kvm *kvm)
> >         if (r)
> >                 return r;
> >
> > -       return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
> > +       return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache[nid],
> >                                          &kvm->arch.split_shadow_page_cache_lock,
> >                                          1);
> >  }
> >
> > -static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *huge_sptep)
> > +static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm,
> > +                                                       u64 *huge_sptep,
> > +                                                       u64 huge_spte)
>
> These can go on the same line.

Git diff is showing it weirdly. They are aligned with "struct kvm *kvm",
and both will stay on separate lines to keep them within the 80-character
limit.


>
> >  {
> >         struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
> >         struct shadow_page_caches caches = {};
> >         union kvm_mmu_page_role role;
> >         unsigned int access;
> >         gfn_t gfn;
> > +       int nid;
> >
> >         gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
> >         access = kvm_mmu_page_get_access(huge_sp, spte_index(huge_sptep));
> > @@ -6338,9 +6345,11 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
> >          */
> >         role = kvm_mmu_child_role(huge_sptep, /*direct=*/true, access);
> >
> > +       nid = kvm_pfn_to_page_table_nid(spte_to_pfn(huge_spte));
> > +
> >         /* Direct SPs do not require a shadowed_info_cache. */
> >         caches.page_header_cache = &kvm->arch.split_page_header_cache;
> > -       caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> > +       caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache[nid];
> >         caches.shadow_page_cache_lock = &kvm->arch.split_shadow_page_cache_lock;
> >
> >         /* Safe to pass NULL for vCPU since requesting a direct SP. */
> > @@ -6360,7 +6369,7 @@ static void shadow_mmu_split_huge_page(struct kvm *kvm,
> >         gfn_t gfn;
> >         int index;
> >
> > -       sp = shadow_mmu_get_sp_for_split(kvm, huge_sptep);
> > +       sp = shadow_mmu_get_sp_for_split(kvm, huge_sptep, huge_spte);
> >
> >         for (index = 0; index < SPTE_ENT_PER_PAGE; index++) {
> >                 sptep = &sp->spt[index];
> > @@ -6398,7 +6407,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
> >                                           u64 *huge_sptep)
> >  {
> >         struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
> > -       int level, r = 0;
> > +       int level, r = 0, nid;
> >         gfn_t gfn;
> >         u64 spte;
> >
> > @@ -6406,13 +6415,14 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
> >         gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
> >         level = huge_sp->role.level;
> >         spte = *huge_sptep;
> > +       nid = kvm_pfn_to_page_table_nid(spte_to_pfn(spte));
> >
> >         if (kvm_mmu_available_pages(kvm) <= KVM_MIN_FREE_MMU_PAGES) {
> >                 r = -ENOSPC;
> >                 goto out;
> >         }
> >
> > -       if (need_topup_split_caches_or_resched(kvm)) {
> > +       if (need_topup_split_caches_or_resched(kvm, nid)) {
> >                 write_unlock(&kvm->mmu_lock);
> >                 cond_resched();
> >                 /*
> > @@ -6420,7 +6430,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
> >                  * rmap iterator should be restarted because the MMU lock was
> >                  * dropped.
> >                  */
> > -               r = topup_split_caches(kvm) ?: -EAGAIN;
> > +               r = topup_split_caches(kvm, nid) ?: -EAGAIN;
> >                 write_lock(&kvm->mmu_lock);
> >                 goto out;
> >         }
> > @@ -6709,17 +6719,15 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
> >  }
> >
> >  static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache,
> > -                                     int cache_count,
> >                                       spinlock_t *cache_lock)
> >  {
> >         unsigned long freed = 0;
> >         int nid;
> >
> >         spin_lock(cache_lock);
> > -       for (nid = 0; nid < cache_count; nid++) {
> > -               if (node_online(nid) && cache[nid].nobjs)
> > +       for_each_online_node(nid)
> > +               if (cache[nid].nobjs)
> >                         freed += kvm_mmu_empty_memory_cache(&cache[nid]);
> > -       }
> >         spin_unlock(cache_lock);
> >         return freed;
> >  }
> > @@ -6741,8 +6749,7 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >                         first_kvm = kvm;
> >                 list_move_tail(&kvm->vm_list, &vm_list);
> >
> > -               freed += mmu_shrink_cache(&kvm->arch.split_shadow_page_cache,
> > -                                         1,
> > +               freed += mmu_shrink_cache(kvm->arch.split_shadow_page_cache,
> >                                           &kvm->arch.split_shadow_page_cache_lock);
> >
> >                 if (freed >= sc->nr_to_scan)
> > @@ -6750,7 +6757,6 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >
> >                 kvm_for_each_vcpu(i, vcpu, kvm) {
> >                         freed += mmu_shrink_cache(vcpu->arch.mmu_shadow_page_cache,
> > -                                                 MAX_NUMNODES,
> >                                                   &vcpu->arch.mmu_shadow_page_cache_lock);
> >                 }
> >
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 9/9] KVM: x86/mmu: Reduce default cache size in KVM from 40 to PT64_ROOT_MAX_LEVEL
  2022-12-27 19:52   ` Ben Gardon
@ 2022-12-28 22:08     ` Vipin Sharma
  0 siblings, 0 replies; 47+ messages in thread
From: Vipin Sharma @ 2022-12-28 22:08 UTC (permalink / raw)
  To: Ben Gardon; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Tue, Dec 27, 2022 at 11:52 AM Ben Gardon <bgardon@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE is set to 40 without any specific
> > reason. Reduce default size to PT64_ROOT_MAX_LEVEL, which is currently
> > 5.
> >
> > Change mmu_pte_list_desc_cache size to what is needed as it is more than
> > 5 but way less than 40.
>
> Why do you say more than 5? At least to resolve a page fault we'll
> never need more than 4 pages on a system with 5 level paging since the
> root is already allocated.

Because of the comment in code:
> >         /* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
> > -       r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
> > -                                      1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);

>
> >
> > Tested by running dirty_log_perf_test on both the TDP and shadow MMU with 48
> > vCPUs and 2GB/vCPU on a 2-NUMA-node machine. No impact on
> > performance was noticed.
> >
> > Ran perf on dirty_log_perf_test and found kvm_mmu_get_free_page() calls
> > reduced by ~3300, which is close to 48 (vCPUs) * 2 (nodes) * 35 (reduction
> > in cache size from 40 to 5) = 3360.
> >
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/x86/include/asm/kvm_types.h | 2 +-
> >  arch/x86/kvm/mmu/mmu.c           | 7 ++++---
> >  2 files changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_types.h b/arch/x86/include/asm/kvm_types.h
> > index 08f1b57d3b62..752dab218a62 100644
> > --- a/arch/x86/include/asm/kvm_types.h
> > +++ b/arch/x86/include/asm/kvm_types.h
> > @@ -2,6 +2,6 @@
> >  #ifndef _ASM_X86_KVM_TYPES_H
> >  #define _ASM_X86_KVM_TYPES_H
> >
> > -#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
> > +#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE PT64_ROOT_MAX_LEVEL
>
> Please add a comment explaining why this value was chosen.

Okay


>
> >
> >  #endif /* _ASM_X86_KVM_TYPES_H */
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 7454bfc49a51..f89d933ff380 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -677,11 +677,12 @@ static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> >
> >  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> >  {
> > -       int r, nid;
> > +       int r, nid, desc_capacity;
> >
> >         /* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
> > -       r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
> > -                                      1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
> > +       desc_capacity = 1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM;
> > +       r = __kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
> > +                                        desc_capacity, desc_capacity);
> >         if (r)
> >                 return r;
> >
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >


* Re: [Patch v3 7/9] KVM: x86/mmu: Allocate page table's pages on NUMA node of the underlying pages
  2022-12-28 22:08     ` Vipin Sharma
@ 2022-12-29 18:20       ` Ben Gardon
  0 siblings, 0 replies; 47+ messages in thread
From: Ben Gardon @ 2022-12-29 18:20 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Wed, Dec 28, 2022 at 2:08 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> On Tue, Dec 27, 2022 at 11:34 AM Ben Gardon <bgardon@google.com> wrote:
> >
> > On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> > >
> > > Page table pages of a VM are currently allocated based on the current
> > > task's NUMA node or its mempolicy. This can cause suboptimal remote
> > > accesses by the vCPU if it is accessing physical pages local to its NUMA
> > > node while the page table pages mapping those physical pages were created
> > > by some other vCPU which was on a different NUMA node or had a different
> > > policy.
> > >
> > > Allocate page table pages on the same NUMA node where the underlying
> > > physical page exists. Page tables at levels 5, 4, and 3 might not end up
> > > on the same NUMA node as they can span multiple NUMA nodes.
> >
> > A page table at any level could map memory spanning multiple NUMA
> > nodes; it just becomes more likely at higher levels.
> > We're only guaranteed that a page table maps memory all on the same
> > node if it's a split hugepage.
>
> Even in this case, it is a best effort.
>
> > This change can only guarantee that the page table pages are allocated
> > on the same node as at least some of the memory they map.
> > Of course in practice, the above is absolutely correct since we'd
> > expect to have multi-GB continuous ranges of GFNs allocated on the
> > same node via huge pages.
> >
> > And since the root pages are allocated based only on where the thread
> > allocating them is running, they're not actually guaranteed to be on
> > the same node as any of the memory they map. (Though they probably
> > will be.)
> >
>
> I will add more details in the commit in the next version.
>
> > >
> > > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > > ---
> > >  arch/x86/include/asm/kvm_host.h |  2 +-
> > >  arch/x86/kvm/mmu/mmu.c          | 63 ++++++++++++++++++++++-----------
> > >  arch/x86/kvm/mmu/paging_tmpl.h  |  4 +--
> > >  arch/x86/kvm/mmu/tdp_mmu.c      | 11 +++---
> > >  virt/kvm/kvm_main.c             |  2 +-
> > >  5 files changed, 53 insertions(+), 29 deletions(-)
> > >
> > > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > > index 293994fabae3..b1f319ad6f89 100644
> > > --- a/arch/x86/include/asm/kvm_host.h
> > > +++ b/arch/x86/include/asm/kvm_host.h
> > > @@ -782,7 +782,7 @@ struct kvm_vcpu_arch {
> > >         struct kvm_mmu *walk_mmu;
> > >
> > >         struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
> > > -       struct kvm_mmu_memory_cache mmu_shadow_page_cache;
> > > +       struct kvm_mmu_memory_cache mmu_shadow_page_cache[MAX_NUMNODES];
> > >         struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
> > >         struct kvm_mmu_memory_cache mmu_page_header_cache;
> > >
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 23a3b82b2384..511c6ef265ee 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -677,24 +677,29 @@ static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > >
> > >  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> > >  {
> > > -       int r;
> > > +       int r, nid;
> > >
> > >         /* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
> > >         r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
> > >                                        1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
> > >         if (r)
> > >                 return r;
> > > -       r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > > -                                     &vcpu->arch.mmu_shadow_page_cache_lock,
> > > -                                     PT64_ROOT_MAX_LEVEL);
> > > -       if (r)
> > > -               return r;
> > > +
> > > +       for_each_online_node(nid) {
> > > +               r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache[nid],
> > > +                                             &vcpu->arch.mmu_shadow_page_cache_lock,
> > > +                                             PT64_ROOT_MAX_LEVEL);
> > > +               if (r)
> > > +                       return r;
> > > +       }
> > > +
> > >         if (maybe_indirect) {
> > >                 r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadowed_info_cache,
> > >                                                PT64_ROOT_MAX_LEVEL);
> > >                 if (r)
> > >                         return r;
> > >         }
> > > +
> > >         return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
> > >                                           PT64_ROOT_MAX_LEVEL);
> > >  }
> > > @@ -715,9 +720,14 @@ static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > >
> > >  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
> > >  {
> > > +       int nid;
> > > +
> > >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> > > -       mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > > -                                &vcpu->arch.mmu_shadow_page_cache_lock);
> > > +
> > > +       for_each_node(nid)
> > > +               mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache[nid],
> > > +                                        &vcpu->arch.mmu_shadow_page_cache_lock);
> > > +
> >
> > Was just trying to think if there could be any issue with memory
> > leakage if the online nodes changed, though IDK if any hardware does
> > that.
> > Still, it might be more robust to use ARRAY_SIZE and cover the whole array.
>
> for_each_node() goes through all of the possible nodes on the system,
> whereas for_each_online_node() goes through only online nodes.
> Current code seems right to me, let me know if I am overlooking
> something.

Ah okay, I didn't see the distinction. That sounds good to me.

>
> >
> > >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
> > >         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
> > >  }
> > > @@ -2256,11 +2266,12 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
> > >
> > >  static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
> > >                                                     gfn_t gfn,
> > > -                                                   union kvm_mmu_page_role role)
> > > +                                                   union kvm_mmu_page_role role,
> > > +                                                   int nid)
> > >  {
> > >         struct shadow_page_caches caches = {
> > >                 .page_header_cache = &vcpu->arch.mmu_page_header_cache,
> > > -               .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
> > > +               .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache[nid],
> > >                 .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
> > >                 .shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
> > >         };
> > > @@ -2316,15 +2327,19 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
> > >
> > >  static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
> > >                                                  u64 *sptep, gfn_t gfn,
> > > -                                                bool direct, unsigned int access)
> > > +                                                bool direct, unsigned int access,
> > > +                                                kvm_pfn_t pfn)
> > >  {
> > >         union kvm_mmu_page_role role;
> > > +       int nid;
> > >
> > >         if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
> > >                 return ERR_PTR(-EEXIST);
> > >
> > >         role = kvm_mmu_child_role(sptep, direct, access);
> > > -       return kvm_mmu_get_shadow_page(vcpu, gfn, role);
> > > +       nid = kvm_pfn_to_page_table_nid(pfn);
> > > +
> > > +       return kvm_mmu_get_shadow_page(vcpu, gfn, role, nid);
> > >  }
> > >
> > >  static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
> > > @@ -3208,7 +3223,8 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > >                 if (it.level == fault->goal_level)
> > >                         break;
> > >
> > > -               sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);
> > > +               sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true,
> > > +                                         ACC_ALL, fault->pfn);
> > >                 if (sp == ERR_PTR(-EEXIST))
> > >                         continue;
> > >
> > > @@ -3636,7 +3652,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
> > >         WARN_ON_ONCE(quadrant && !role.has_4_byte_gpte);
> > >         WARN_ON_ONCE(role.direct && role.has_4_byte_gpte);
> > >
> > > -       sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
> > > +       sp = kvm_mmu_get_shadow_page(vcpu, gfn, role, numa_mem_id());
> > >         ++sp->root_count;
> > >
> > >         return __pa(sp->spt);
> > > @@ -5952,7 +5968,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
> > >
> > >  int kvm_mmu_create(struct kvm_vcpu *vcpu)
> > >  {
> > > -       int ret;
> > > +       int ret, nid;
> > >
> > >         INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
> > >                                   pte_list_desc_cache, NUMA_NO_NODE);
> > > @@ -5960,8 +5976,9 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> > >         INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
> > >                                   mmu_page_header_cache, NUMA_NO_NODE);
> > >
> > > -       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
> > > -                                 NULL, NUMA_NO_NODE);
> > > +       for_each_node(nid)
> > > +               INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache[nid],
> > > +                                         NULL, nid);
> > >         spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
> > >
> > >         vcpu->arch.mmu = &vcpu->arch.root_mmu;
> > > @@ -6692,13 +6709,17 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
> > >  }
> > >
> > >  static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache,
> > > +                                     int cache_count,
> > >                                       spinlock_t *cache_lock)
> > >  {
> > >         unsigned long freed = 0;
> > > +       int nid;
> > >
> > >         spin_lock(cache_lock);
> > > -       if (cache->nobjs)
> > > -               freed = kvm_mmu_empty_memory_cache(cache);
> > > +       for (nid = 0; nid < cache_count; nid++) {
> > > +               if (node_online(nid) && cache[nid].nobjs)
> >
> > Is there any reason to keep the cache if !node_online(nid)?
> > Actually, I'd also just drop the cache_count argument and always
> > iterate over the entire array, only checking nobjs. There's no
> > guarantee I'm aware of that the set of nodes has a sequential series
> > of IDs starting at 0 and you'd get a bug if that wasn't the case since
> > it only iterates to  nid < cache_count here but some of the earlier
> > nids might not have been online.
> >
>
> This is just temporary and will be removed in the next patch in the series.
>
> mmu_shrink_cache() is used for both split_shadow_page_cache (single
> object) and mmu_shadow_page_cache[MAX_NUMNODES].
>
> In the next patch of this series, I used for_each_online_node(nid); I
> will change it to for_each_node() in the next version.
>
> > > +                       freed += kvm_mmu_empty_memory_cache(&cache[nid]);
> > > +       }
> > >         spin_unlock(cache_lock);
> > >         return freed;
> > >  }
> > > @@ -6721,13 +6742,15 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > >                 list_move_tail(&kvm->vm_list, &vm_list);
> > >
> > >                 freed += mmu_shrink_cache(&kvm->arch.split_shadow_page_cache,
> > > +                                         1,
> >
> > So lonely.
> > One.
> > All by itself,
> > with only a comma for company.
> >
> > NIT: This could be merged to the previous or subsequent lines.
>
> This is a strong and independent '1'.
>
> >
> > >                                           &kvm->arch.split_shadow_page_cache_lock);
> > >
> > >                 if (freed >= sc->nr_to_scan)
> > >                         break;
> > >
> > >                 kvm_for_each_vcpu(i, vcpu, kvm) {
> > > -                       freed += mmu_shrink_cache(&vcpu->arch.mmu_shadow_page_cache,
> > > +                       freed += mmu_shrink_cache(vcpu->arch.mmu_shadow_page_cache,
> > > +                                                 MAX_NUMNODES,
> > >                                                   &vcpu->arch.mmu_shadow_page_cache_lock);
> > >                 }
> > >
> > > diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> > > index e5662dbd519c..1ceca62ec4cf 100644
> > > --- a/arch/x86/kvm/mmu/paging_tmpl.h
> > > +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> > > @@ -652,7 +652,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> > >                 table_gfn = gw->table_gfn[it.level - 2];
> > >                 access = gw->pt_access[it.level - 2];
> > >                 sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
> > > -                                         false, access);
> > > +                                         false, access, fault->pfn);
> > >
> > >                 if (sp != ERR_PTR(-EEXIST)) {
> > >                         /*
> > > @@ -708,7 +708,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> > >                 validate_direct_spte(vcpu, it.sptep, direct_access);
> > >
> > >                 sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn,
> > > -                                         true, direct_access);
> > > +                                         true, direct_access, fault->pfn);
> > >                 if (sp == ERR_PTR(-EEXIST))
> > >                         continue;
> > >
> > > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > > index 376b8dceb3f9..b5abae2366dd 100644
> > > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > > @@ -259,12 +259,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
> > >                     kvm_mmu_page_as_id(_root) != _as_id) {              \
> > >                 } else
> > >
> > > -static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> > > +static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu, int nid)
> > >  {
> > >         struct kvm_mmu_page *sp;
> > >
> > >         sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> > > -       sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
> > > +       sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache[nid],
> > >                                                 &vcpu->arch.mmu_shadow_page_cache_lock);
> > >
> > >         return sp;
> > > @@ -317,7 +317,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
> > >                         goto out;
> > >         }
> > >
> > > -       root = tdp_mmu_alloc_sp(vcpu);
> > > +       root = tdp_mmu_alloc_sp(vcpu, numa_mem_id());
> >
> > Might be worth calling out somewhere that the root page is just
> > allocated based on where the thread allocating it runs.
> >
>
> How about a comment just up here, or do you prefer it at tdp_mmu_roots in
> struct kvm_arch{}?

Here or just in the commit description or cover letter.
Thanks!

>
> > >         tdp_mmu_init_sp(root, NULL, 0, role);
> > >
> > >         refcount_set(&root->tdp_mmu_root_count, 1);
> > > @@ -1149,7 +1149,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > >         struct kvm *kvm = vcpu->kvm;
> > >         struct tdp_iter iter;
> > >         struct kvm_mmu_page *sp;
> > > -       int ret = RET_PF_RETRY;
> > > +       int ret = RET_PF_RETRY, nid;
> > >
> > >         kvm_mmu_hugepage_adjust(vcpu, fault);
> > >
> > > @@ -1178,11 +1178,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > >                     !is_large_pte(iter.old_spte))
> > >                         continue;
> > >
> > > +               nid = kvm_pfn_to_page_table_nid(fault->pfn);
> > >                 /*
> > >                  * The SPTE is either non-present or points to a huge page that
> > >                  * needs to be split.
> > >                  */
> > > -               sp = tdp_mmu_alloc_sp(vcpu);
> > > +               sp = tdp_mmu_alloc_sp(vcpu, nid);
> > >                 tdp_mmu_init_child_sp(sp, &iter);
> > >
> > >                 sp->nx_huge_page_disallowed = fault->huge_page_disallowed;
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index d96c8146e9ba..4f3db7ffeba8 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -415,7 +415,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
> > >         if (mc->kmem_cache)
> > >                 return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
> > >         else
> > > -               return (void *)__get_free_page(gfp_flags);
> > > +               return kvm_mmu_get_free_page(mc->node, gfp_flags);
> >
> > You could do part of this change in the commit that introduced
> > kvm_mmu_get_free_page too.
>
> Yeah, I can do it there as well. No strong opinions. I will update in
> the next version.
>
> > >  }
> > >
> > >  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
> > > --
> > > 2.39.0.314.g84b9a713c41-goog
> > >


* Re: [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  2022-12-28 22:07     ` Vipin Sharma
@ 2022-12-29 18:22       ` Ben Gardon
  2023-01-03 17:36         ` Vipin Sharma
  0 siblings, 1 reply; 47+ messages in thread
From: Ben Gardon @ 2022-12-29 18:22 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel

On Wed, Dec 28, 2022 at 2:08 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> On Tue, Dec 27, 2022 at 11:10 AM Ben Gardon <bgardon@google.com> wrote:
> >
> > On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> > >
> > > Add a 'node' field in kvm_mmu_memory_cache{} to denote which NUMA node
> > > this cache should allocate memory from. Default-initialize it to
> > > NUMA_NO_NODE in all architectures.
> > >
> > > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > > ---
> > >  arch/arm64/kvm/arm.c      |  2 +-
> > >  arch/arm64/kvm/mmu.c      |  4 +++-
> > >  arch/mips/kvm/mips.c      |  2 ++
> > >  arch/riscv/kvm/mmu.c      |  2 +-
> > >  arch/riscv/kvm/vcpu.c     |  2 +-
> > >  arch/x86/kvm/mmu/mmu.c    | 22 ++++++++++++----------
> > >  include/linux/kvm_host.h  |  6 ++++++
> > >  include/linux/kvm_types.h |  2 ++
> > >  8 files changed, 28 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index 9c5573bc4614..52a41f4532e2 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > >         vcpu->arch.target = -1;
> > >         bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> > >
> > > -       vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> > >
> > >         /*
> > >          * Default value for the FP state, will be overloaded at load
> > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > index 31d7fa4c7c14..bd07155e17fa 100644
> > > --- a/arch/arm64/kvm/mmu.c
> > > +++ b/arch/arm64/kvm/mmu.c
> > > @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> > >  {
> > >         phys_addr_t addr;
> > >         int ret = 0;
> > > -       struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> > > +       struct kvm_mmu_memory_cache cache;
> > >         struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
> > >         enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
> > >                                      KVM_PGTABLE_PROT_R |
> > >                                      (writable ? KVM_PGTABLE_PROT_W : 0);
> > >
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&cache, NULL, NUMA_NO_NODE);
> > > +
> > >         if (is_protected_kvm_enabled())
> > >                 return -EPERM;
> > >
> > > diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> > > index a25e0b73ee70..b017c29a9340 100644
> > > --- a/arch/mips/kvm/mips.c
> > > +++ b/arch/mips/kvm/mips.c
> > > @@ -304,6 +304,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > >                      HRTIMER_MODE_REL);
> > >         vcpu->arch.comparecount_timer.function = kvm_mips_comparecount_wakeup;
> > >
> > > +       vcpu->arch.mmu_page_cache.node = NUMA_NO_NODE;
> > > +
> >
> > It looks weird to have MIPS not using the initialization MACRO. Should
> > it just have a GFP_ZERO parameter?
>
> MIPS is not setting GFP_ZERO explicitly before my series, so I didn't
> make it GFP_ZERO. I am not sure if MIPS needs it or not; I tried to
> keep the same functionality in my patch.
>
> May be someone from MIPS can tell more about it.

That makes sense; I just don't want to see MIPS get left behind
because we move the cache init logic to a macro or function. Folks
might update the init function but forget to update MIPS too.

>
> >
> > >         /*
> > >          * Allocate space for host mode exception handlers that handle
> > >          * guest mode exits
> > > diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> > > index 34b57e0be2ef..119de4520cc6 100644
> > > --- a/arch/riscv/kvm/mmu.c
> > > +++ b/arch/riscv/kvm/mmu.c
> > > @@ -353,9 +353,9 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> > >         phys_addr_t addr, end;
> > >         struct kvm_mmu_memory_cache pcache = {
> > >                 .gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
> > > -               .gfp_zero = __GFP_ZERO,
> > >         };
> > >
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&pcache, NULL, NUMA_NO_NODE);
> > >         end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
> > >         pfn = __phys_to_pfn(hpa);
> > >
> > > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > > index 7c08567097f0..189b14feb365 100644
> > > --- a/arch/riscv/kvm/vcpu.c
> > > +++ b/arch/riscv/kvm/vcpu.c
> > > @@ -161,7 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > >
> > >         /* Mark this VCPU never ran */
> > >         vcpu->arch.ran_atleast_once = false;
> > > -       vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> > >         bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
> > >
> > >         /* Setup ISA features available to VCPU */
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 6f6a10d7a871..23a3b82b2384 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -5954,13 +5954,14 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> > >  {
> > >         int ret;
> > >
> > > -       vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
> > > -       vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
> > > +                                 pte_list_desc_cache, NUMA_NO_NODE);
> > >
> > > -       vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> > > -       vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
> > > +                                 mmu_page_header_cache, NUMA_NO_NODE);
> > >
> > > -       vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
> > > +                                 NULL, NUMA_NO_NODE);
> > >         spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
> > >
> > >         vcpu->arch.mmu = &vcpu->arch.root_mmu;
> > > @@ -6124,14 +6125,15 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> > >         node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
> > >         kvm_page_track_register_notifier(kvm, node);
> > >
> > > -       kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> > > -       kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
> > > +                                 mmu_page_header_cache, NUMA_NO_NODE);
> > >
> > > -       kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> > > +                                 NULL, NUMA_NO_NODE);
> > >         spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
> > >
> > > -       kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
> > > -       kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
> > > +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
> > > +                                 pte_list_desc_cache, NUMA_NO_NODE);
> > >
> > >         return 0;
> > >  }
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index a262e15ebd19..719687a37ef7 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -2302,4 +2302,10 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
> > >  /* Max number of entries allowed for each kvm dirty ring */
> > >  #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
> > >
> > > +#define INIT_KVM_MMU_MEMORY_CACHE(_cache, _kmem_cache, _node) ({       \
> > > +       (_cache)->kmem_cache = _kmem_cache;                             \
> > > +       (_cache)->gfp_zero = __GFP_ZERO;                                \
> > > +       (_cache)->node = _node;                                         \
> > > +})
> > > +
> >
> > Given that this initialization is probably not happening in a super
> > hot path, is there any downside to just using a function for the
> > initialization?
> >
>
> It can totally be a function as well. I will make it function in the
> next version.

Awesome, thanks.

>
>
> > >  #endif
> > > diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> > > index 76de36e56cdf..9c70ce95e51f 100644
> > > --- a/include/linux/kvm_types.h
> > > +++ b/include/linux/kvm_types.h
> > > @@ -97,6 +97,8 @@ struct kvm_mmu_memory_cache {
> > >         struct kmem_cache *kmem_cache;
> > >         int capacity;
> > >         void **objects;
> > > +       /* Node on which memory should be allocated by default */
> > > +       int node;
> > >  };
> > >  #endif
> > >
> > > --
> > > 2.39.0.314.g84b9a713c41-goog
> > >


* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2022-12-28 22:07     ` Vipin Sharma
@ 2022-12-29 21:15       ` David Matlack
  2023-01-03 17:38         ` Vipin Sharma
  0 siblings, 1 reply; 47+ messages in thread
From: David Matlack @ 2022-12-29 21:15 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: Ben Gardon, seanjc, pbonzini, kvm, linux-kernel

On Wed, Dec 28, 2022 at 02:07:49PM -0800, Vipin Sharma wrote:
> On Tue, Dec 27, 2022 at 10:37 AM Ben Gardon <bgardon@google.com> wrote:
> > On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> > >
> > > Tested this change by running dirty_log_perf_test while dropping cache
> > > via "echo 2 > /proc/sys/vm/drop_caches" at 1 second interval
> > > continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> > > logs from kvm_mmu_memory_cache_alloc(), which is expected.
> >
> > Oh, that's not a good thing. I don't think we want to be hitting those
> > warnings. For one, kernel warnings should not be expected behavior,
> > probably for many reasons, but at least because Syzbot will find it.
> > In this particular case, we don't want to hit that because in that
> > case we'll try to do a GFP_ATOMIC, which can fail, and if it fails,
> > we'll BUG:
> >
> > void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
> > {
> >         void *p;
> >
> >         if (WARN_ON(!mc->nobjs))
> >                 p = mmu_memory_cache_alloc_obj(mc, GFP_ATOMIC | __GFP_ACCOUNT);
> >         else
> >                 p = mc->objects[--mc->nobjs];
> >         BUG_ON(!p);
> >         return p;
> > }
> >
> > Perhaps the risk of actually panicking is small, but it probably
> > indicates that we need better error handling around failed allocations
> > from the cache.
> > Or, the slightly less elegant approach might be to just hold the cache
> > lock around the cache topup and use of pages from the cache, but
> > adding better error handling would probably be cleaner.
> 
> I was counting on the fact that shrinker will ideally run only in
> extreme cases, i.e. host is running on low memory. So, this WARN_ON
> will only be rarely used. I was not aware of Syzbot, it seems like it
> will be a concern if it does this kind of testing.

In an extreme low-memory situation, forcing vCPUS to do GFP_ATOMIC
allocations to handle page faults is risky. Plus it's a waste of time to
free that memory since it's just going to get immediately reallocated.

> 
> I thought about keeping a mutex, taking it during topup and releasing
> it after the whole operation is done but I stopped it as the duration
> of holding mutex will be long and might block the memory shrinker
> longer. I am not sure though, if this is a valid concern.

Use mutex_trylock() to skip any vCPUs that are currently handling page
faults.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2022-12-22  2:34 ` [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches Vipin Sharma
  2022-12-27 18:37   ` Ben Gardon
@ 2022-12-29 21:54   ` David Matlack
  2023-01-03 18:01     ` Vipin Sharma
  2023-01-03 19:32   ` Mingwei Zhang
  2023-01-16  4:14   ` kernel test robot
  3 siblings, 1 reply; 47+ messages in thread
From: David Matlack @ 2022-12-29 21:54 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Wed, Dec 21, 2022 at 06:34:49PM -0800, Vipin Sharma wrote:
> mmu_shrink_scan() is very disruptive to VMs. It picks the first
> VM in the vm_list, zaps the oldest page which is most likely an upper
> level SPTEs and most like to be reused. Prior to TDP MMU, this is even
> more disruptive in nested VMs case, considering L1 SPTEs will be the
> oldest even though most of the entries are for L2 SPTEs.
> 
> As discussed in
> https://lore.kernel.org/lkml/Y45dldZnI6OIf+a5@google.com/
> shrinker logic has not be very useful in actually keeping VMs performant
> and reducing memory usage.
> 
> Change mmu_shrink_scan() to free pages from the vCPU's shadow page
> cache.  Freeing pages from cache doesn't cause vCPU exits, therefore, a
> VM's performance should not be affected.

Can you split this commit up? e.g. First drop the old shrinking logic in
one commit (but leave the shrinking infrastructure in place). Then a
commit to make the shrinker free the per-vCPU shadow page caches. And
then perhaps another to make the shrinker free the per-VM shadow page
cache used for eager splitting.

> 
> This also allows to change cache capacities without worrying too much
> about high memory usage in cache.
> 
> Tested this change by running dirty_log_perf_test while dropping cache
> via "echo 2 > /proc/sys/vm/drop_caches" at 1 second interval
> continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> logs from kvm_mmu_memory_cache_alloc(), which is expected.
> 
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/include/asm/kvm_host.h |   5 +
>  arch/x86/kvm/mmu/mmu.c          | 163 +++++++++++++++++++-------------
>  arch/x86/kvm/mmu/mmu_internal.h |   2 +
>  arch/x86/kvm/mmu/tdp_mmu.c      |   3 +-
>  include/linux/kvm_host.h        |   1 +
>  virt/kvm/kvm_main.c             |  11 ++-
>  6 files changed, 114 insertions(+), 71 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index aa4eb8cfcd7e..89cc809e4a00 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -786,6 +786,11 @@ struct kvm_vcpu_arch {
>  	struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
>  	struct kvm_mmu_memory_cache mmu_page_header_cache;
>  
> +	/*
> +	 * Protects change in size of mmu_shadow_page_cache cache.
> +	 */
> +	spinlock_t mmu_shadow_page_cache_lock;
> +
>  	/*
>  	 * QEMU userspace and the guest each have their own FPU state.
>  	 * In vcpu_run, we switch between the user and guest FPU contexts.
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 254bc46234e0..157417e1cb6e 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -164,7 +164,10 @@ struct kvm_shadow_walk_iterator {
>  
>  static struct kmem_cache *pte_list_desc_cache;
>  struct kmem_cache *mmu_page_header_cache;
> -static struct percpu_counter kvm_total_used_mmu_pages;
> +/*
> + * Total number of unused pages in MMU shadow page cache.
> + */
> +static struct percpu_counter kvm_total_unused_mmu_pages;
>  
>  static void mmu_spte_set(u64 *sptep, u64 spte);
>  
> @@ -655,6 +658,22 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
>  	}
>  }
>  
> +static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> +				     spinlock_t *cache_lock)
> +{
> +	int orig_nobjs;
> +	int r;
> +
> +	spin_lock(cache_lock);
> +	orig_nobjs = cache->nobjs;
> +	r = kvm_mmu_topup_memory_cache(cache, PT64_ROOT_MAX_LEVEL);
> +	if (orig_nobjs != cache->nobjs)
> +		percpu_counter_add(&kvm_total_unused_mmu_pages,
> +				   (cache->nobjs - orig_nobjs));
> +	spin_unlock(cache_lock);
> +	return r;
> +}
> +
>  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  {
>  	int r;
> @@ -664,8 +683,8 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
>  	if (r)
>  		return r;
> -	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> -				       PT64_ROOT_MAX_LEVEL);
> +	r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> +				      &vcpu->arch.mmu_shadow_page_cache_lock);
>  	if (r)
>  		return r;
>  	if (maybe_indirect) {
> @@ -678,10 +697,25 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  					  PT64_ROOT_MAX_LEVEL);
>  }
>  
> +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> +				     spinlock_t *cache_lock)
> +{
> +	int orig_nobjs;
> +
> +	spin_lock(cache_lock);
> +	orig_nobjs = cache->nobjs;
> +	kvm_mmu_free_memory_cache(cache);
> +	if (orig_nobjs)
> +		percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> +
> +	spin_unlock(cache_lock);
> +}

It would be nice to avoid adding these wrapper functions.

Once you add a mutex to protect the caches from being freed while vCPUs
are in the middle of a page fault you can drop the spin lock. After that
the only reason to have these wrappers is to update
kvm_total_unused_mmu_pages.

Do we really need kvm_total_unused_mmu_pages? Why not just dynamically
calculate the number of of unused pages in mmu_shrink_count()? Or just
estimate the count, e.g. num_vcpus * KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE?
Or have per-VM or per-vCPU shrinkers to avoid needing to do any
aggregation?

> +
>  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> -	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> +	mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> +				 &vcpu->arch.mmu_shadow_page_cache_lock);
>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);

mmu_shadowed_info_cache can be freed by the shrinker as well.

>  	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
>  }
> @@ -1693,27 +1727,15 @@ static int is_empty_shadow_page(u64 *spt)
>  }
>  #endif
>  
> -/*
> - * This value is the sum of all of the kvm instances's
> - * kvm->arch.n_used_mmu_pages values.  We need a global,
> - * aggregate version in order to make the slab shrinker
> - * faster
> - */
> -static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
> -{
> -	kvm->arch.n_used_mmu_pages += nr;
> -	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
> -}
> -
>  static void kvm_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
> -	kvm_mod_used_mmu_pages(kvm, +1);
> +	kvm->arch.n_used_mmu_pages++;
>  	kvm_account_pgtable_pages((void *)sp->spt, +1);
>  }
>  
>  static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
> -	kvm_mod_used_mmu_pages(kvm, -1);
> +	kvm->arch.n_used_mmu_pages--;
>  	kvm_account_pgtable_pages((void *)sp->spt, -1);
>  }
>  
> @@ -2150,8 +2172,31 @@ struct shadow_page_caches {
>  	struct kvm_mmu_memory_cache *page_header_cache;
>  	struct kvm_mmu_memory_cache *shadow_page_cache;
>  	struct kvm_mmu_memory_cache *shadowed_info_cache;
> +	/*
> +	 * Protects change in size of shadow_page_cache cache.
> +	 */
> +	spinlock_t *shadow_page_cache_lock;
>  };
>  
> +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> +				    spinlock_t *cache_lock)
> +{
> +	int orig_nobjs;
> +	void *page;
> +
> +	if (!cache_lock) {
> +		spin_lock(cache_lock);
> +		orig_nobjs = shadow_page_cache->nobjs;
> +	}
> +	page = kvm_mmu_memory_cache_alloc(shadow_page_cache);
> +	if (!cache_lock) {
> +		if (orig_nobjs)
> +			percpu_counter_dec(&kvm_total_unused_mmu_pages);
> +		spin_unlock(cache_lock);
> +	}
> +	return page;
> +}
> +
>  static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
>  						      struct shadow_page_caches *caches,
>  						      gfn_t gfn,
> @@ -2161,7 +2206,8 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
>  	struct kvm_mmu_page *sp;
>  
>  	sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
> -	sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
> +	sp->spt = kvm_mmu_sp_memory_cache_alloc(caches->shadow_page_cache,
> +						caches->shadow_page_cache_lock);
>  	if (!role.direct)
>  		sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
>  
> @@ -2218,6 +2264,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
>  		.page_header_cache = &vcpu->arch.mmu_page_header_cache,
>  		.shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
>  		.shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
> +		.shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
>  	};
>  
>  	return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
> @@ -5916,6 +5963,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
>  
>  	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +	spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
>  
>  	vcpu->arch.mmu = &vcpu->arch.root_mmu;
>  	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> @@ -6051,11 +6099,6 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
>  		kvm_tdp_mmu_zap_invalidated_roots(kvm);
>  }
>  
> -static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
> -{
> -	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
> -}
> -
>  static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
>  			struct kvm_memory_slot *slot,
>  			struct kvm_page_track_notifier_node *node)
> @@ -6277,6 +6320,7 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
>  	/* Direct SPs do not require a shadowed_info_cache. */
>  	caches.page_header_cache = &kvm->arch.split_page_header_cache;
>  	caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> +	caches.shadow_page_cache_lock = NULL;
>  
>  	/* Safe to pass NULL for vCPU since requesting a direct SP. */
>  	return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
> @@ -6646,66 +6690,49 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
>  static unsigned long
>  mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  {
> -	struct kvm *kvm;
> -	int nr_to_scan = sc->nr_to_scan;
> +	struct kvm_mmu_memory_cache *cache;
> +	struct kvm *kvm, *first_kvm = NULL;
>  	unsigned long freed = 0;
> +	/* spinlock for memory cache */
> +	spinlock_t *cache_lock;
> +	struct kvm_vcpu *vcpu;
> +	unsigned long i;
>  
>  	mutex_lock(&kvm_lock);
>  
>  	list_for_each_entry(kvm, &vm_list, vm_list) {
> -		int idx;
> -		LIST_HEAD(invalid_list);
> -
> -		/*
> -		 * Never scan more than sc->nr_to_scan VM instances.
> -		 * Will not hit this condition practically since we do not try
> -		 * to shrink more than one VM and it is very unlikely to see
> -		 * !n_used_mmu_pages so many times.
> -		 */
> -		if (!nr_to_scan--)
> +		if (first_kvm == kvm)
>  			break;
> -		/*
> -		 * n_used_mmu_pages is accessed without holding kvm->mmu_lock
> -		 * here. We may skip a VM instance errorneosly, but we do not
> -		 * want to shrink a VM that only started to populate its MMU
> -		 * anyway.
> -		 */
> -		if (!kvm->arch.n_used_mmu_pages &&
> -		    !kvm_has_zapped_obsolete_pages(kvm))
> -			continue;
> +		if (!first_kvm)
> +			first_kvm = kvm;
> +		list_move_tail(&kvm->vm_list, &vm_list);
>  
> -		idx = srcu_read_lock(&kvm->srcu);
> -		write_lock(&kvm->mmu_lock);
> +		kvm_for_each_vcpu(i, vcpu, kvm) {

What protects this from racing with vCPU creation/deletion?

> +			cache = &vcpu->arch.mmu_shadow_page_cache;
> +			cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock;
> +			if (READ_ONCE(cache->nobjs)) {
> +				spin_lock(cache_lock);
> +				freed += kvm_mmu_empty_memory_cache(cache);
> +				spin_unlock(cache_lock);
> +			}

What about freeing kvm->arch.split_shadow_page_cache as well?

>  
> -		if (kvm_has_zapped_obsolete_pages(kvm)) {
> -			kvm_mmu_commit_zap_page(kvm,
> -			      &kvm->arch.zapped_obsolete_pages);
> -			goto unlock;
>  		}
>  
> -		freed = kvm_mmu_zap_oldest_mmu_pages(kvm, sc->nr_to_scan);
> -
> -unlock:
> -		write_unlock(&kvm->mmu_lock);
> -		srcu_read_unlock(&kvm->srcu, idx);
> -
> -		/*
> -		 * unfair on small ones
> -		 * per-vm shrinkers cry out
> -		 * sadness comes quickly
> -		 */
> -		list_move_tail(&kvm->vm_list, &vm_list);
> -		break;
> +		if (freed >= sc->nr_to_scan)
> +			break;
>  	}
>  
> +	if (freed)
> +		percpu_counter_sub(&kvm_total_unused_mmu_pages, freed);
>  	mutex_unlock(&kvm_lock);
> +	percpu_counter_sync(&kvm_total_unused_mmu_pages);
>  	return freed;
>  }
>  
>  static unsigned long
>  mmu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
>  {
> -	return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> +	return percpu_counter_sum_positive(&kvm_total_unused_mmu_pages);
>  }
>  
>  static struct shrinker mmu_shrinker = {
> @@ -6820,7 +6847,7 @@ int kvm_mmu_vendor_module_init(void)
>  	if (!mmu_page_header_cache)
>  		goto out;
>  
> -	if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
> +	if (percpu_counter_init(&kvm_total_unused_mmu_pages, 0, GFP_KERNEL))
>  		goto out;
>  
>  	ret = register_shrinker(&mmu_shrinker, "x86-mmu");
> @@ -6830,7 +6857,7 @@ int kvm_mmu_vendor_module_init(void)
>  	return 0;
>  
>  out_shrinker:
> -	percpu_counter_destroy(&kvm_total_used_mmu_pages);
> +	percpu_counter_destroy(&kvm_total_unused_mmu_pages);
>  out:
>  	mmu_destroy_caches();
>  	return ret;
> @@ -6847,7 +6874,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
>  void kvm_mmu_vendor_module_exit(void)
>  {
>  	mmu_destroy_caches();
> -	percpu_counter_destroy(&kvm_total_used_mmu_pages);
> +	percpu_counter_destroy(&kvm_total_unused_mmu_pages);
>  	unregister_shrinker(&mmu_shrinker);
>  }
>  
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index ac00bfbf32f6..c2a342028b6a 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -325,4 +325,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>  void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>  void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>  
> +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> +				    spinlock_t *cache_lock);
>  #endif /* __KVM_X86_MMU_INTERNAL_H */
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 764f7c87286f..4974fa96deff 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -264,7 +264,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
>  	struct kvm_mmu_page *sp;
>  
>  	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> -	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> +	sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
> +						&vcpu->arch.mmu_shadow_page_cache_lock);
>  
>  	return sp;
>  }
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 01aad8b74162..efd9b38ea9a2 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1362,6 +1362,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
>  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
>  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);
>  int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
> +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc);
>  void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
>  void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>  #endif
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 13e88297f999..f2d762878b97 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -438,8 +438,10 @@ int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
>  	return mc->nobjs;
>  }
>  
> -void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc)
>  {
> +	int freed = mc->nobjs;
> +
>  	while (mc->nobjs) {
>  		if (mc->kmem_cache)
>  			kmem_cache_free(mc->kmem_cache, mc->objects[--mc->nobjs]);
> @@ -447,8 +449,13 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
>  			free_page((unsigned long)mc->objects[--mc->nobjs]);
>  	}
>  
> -	kvfree(mc->objects);
> +	return freed;
> +}
>  
> +void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> +{
> +	kvm_mmu_empty_memory_cache(mc);
> +	kvfree(mc->objects);
>  	mc->objects = NULL;
>  	mc->capacity = 0;
>  }
> -- 
> 2.39.0.314.g84b9a713c41-goog
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 2/9] KVM: x86/mmu: Remove zapped_obsolete_pages from struct kvm_arch{}
  2022-12-22  2:34 ` [Patch v3 2/9] KVM: x86/mmu: Remove zapped_obsolete_pages from struct kvm_arch{} Vipin Sharma
@ 2022-12-29 21:59   ` David Matlack
  0 siblings, 0 replies; 47+ messages in thread
From: David Matlack @ 2022-12-29 21:59 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Wed, Dec 21, 2022 at 06:34:50PM -0800, Vipin Sharma wrote:
> zapped_obsolete_pages list was used in struct kvm_arch{} to provide
> pages for KVM MMU shrinker. This is not needed now as KVM MMU shrinker
> has been repurposed to free shadow page caches and not
> zapped_obsolete_pages.
> 
> Remove zapped_obsolete_pages from struct kvm_arch{} and use local list
> in kvm_zap_obsolete_pages().
> 
> Signed-off-by: Vipin Sharma <vipinsh@google.com>

Reviewed-by: David Matlack <dmatlack@google.com>

> ---
>  arch/x86/include/asm/kvm_host.h | 1 -
>  arch/x86/kvm/mmu/mmu.c          | 8 ++++----
>  2 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 89cc809e4a00..f89f02e18080 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1215,7 +1215,6 @@ struct kvm_arch {
>  	u8 mmu_valid_gen;
>  	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
>  	struct list_head active_mmu_pages;
> -	struct list_head zapped_obsolete_pages;
>  	/*
>  	 * A list of kvm_mmu_page structs that, if zapped, could possibly be
>  	 * replaced by an NX huge page.  A shadow page is on this list if its
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 157417e1cb6e..3364760a1695 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5987,6 +5987,7 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
>  {
>  	struct kvm_mmu_page *sp, *node;
>  	int nr_zapped, batch = 0;
> +	LIST_HEAD(zapped_pages);

optional nit: The common name of this is invalid_list (see other callers
of __kvm_mmu_prepare_zap_page()).

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 4/9] KVM: Add module param to make page tables NUMA aware
  2022-12-22  2:34 ` [Patch v3 4/9] KVM: Add module param to make page tables NUMA aware Vipin Sharma
@ 2022-12-29 22:05   ` David Matlack
  0 siblings, 0 replies; 47+ messages in thread
From: David Matlack @ 2022-12-29 22:05 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Wed, Dec 21, 2022 at 06:34:52PM -0800, Vipin Sharma wrote:
> Add a numa_aware_page_table module param to make page tables NUMA aware.

Generally it's not good practice to introduce dead code. So I would
request merging this with the next patch.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split
  2022-12-22  2:34 ` [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split Vipin Sharma
  2022-12-27 19:02   ` Ben Gardon
@ 2022-12-29 22:30   ` David Matlack
  2023-01-03 18:26     ` Vipin Sharma
  1 sibling, 1 reply; 47+ messages in thread
From: David Matlack @ 2022-12-29 22:30 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Wed, Dec 21, 2022 at 06:34:53PM -0800, Vipin Sharma wrote:
> When dirty log is enabled, huge pages are split. Page table's pages
> during the split are allocated based on the current thread NUMA node or
> mempolicy. This causes inefficient page table accesses if underlying
> page is on a different NUMA node
> 
> Allocate page table's pages on the same NUMA node as the underlying huge
> page when dirty log is enabled and huge pages are split.
> 
> The performance gain during the pre-copy phase of live migrations of a
> 416 vCPUs and 11 TiB memory VM  on a 8 node host was seen in the range
> of 130% to 150%.

Can you be more specific about this. "The performance" is vague. I know
it's an internal workload and fully explaining it would be difficult,
but you can give readers a slightly more specific idea of what improved.
e.g.

 When testing with a synthetic write-heavy workload in a 416 vCPU VM on
 an 8 NUMA node host, the throughput increased by 150% from X to Y
 operations per second.

It's also necessary to characterize the improvement relative to the
performance when dirty logging is not enabled. Whithout that information
it would be hard for an unfamiliar reader to understand how useful this
change really is.

For example, let's say the throughput of your workload is 100,000
operations per second before dirty logging is enabled, and that drops
down to 1,000 operations per second after dirty logging is enabled. This
commit could increase that by 150% to 2,500 operations per second, but
that's actually not a very meaningful improvement since, either way,
guest performance is degraded by 95+% during dirty logging.

On the other hand, if performance goes from 100,000 to 30,000 normally,
and this commit increases that 30,000 to 75,000 (150%), that's a much
more meaningful improvement.

> 
> Suggested-by: David Matlack <dmatlack@google.com>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 12 ++++++++----
>  include/linux/kvm_host.h   | 18 ++++++++++++++++++
>  2 files changed, 26 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 4974fa96deff..376b8dceb3f9 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1403,7 +1403,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
>  	return spte_set;
>  }
>  
> -static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
> +static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(int nid, gfp_t gfp)
>  {
>  	struct kvm_mmu_page *sp;
>  
> @@ -1413,7 +1413,8 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
>  	if (!sp)
>  		return NULL;
>  
> -	sp->spt = (void *)__get_free_page(gfp);
> +	sp->spt = kvm_mmu_get_free_page(nid, gfp);
> +
>  	if (!sp->spt) {
>  		kmem_cache_free(mmu_page_header_cache, sp);
>  		return NULL;
> @@ -1427,6 +1428,9 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  						       bool shared)
>  {
>  	struct kvm_mmu_page *sp;
> +	int nid;
> +
> +	nid = kvm_pfn_to_page_table_nid(spte_to_pfn(iter->old_spte));
>  
>  	/*
>  	 * Since we are allocating while under the MMU lock we have to be
> @@ -1437,7 +1441,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  	 * If this allocation fails we drop the lock and retry with reclaim
>  	 * allowed.
>  	 */
> -	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
> +	sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_NOWAIT | __GFP_ACCOUNT);
>  	if (sp)
>  		return sp;
>  
> @@ -1449,7 +1453,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  		write_unlock(&kvm->mmu_lock);
>  
>  	iter->yielded = true;
> -	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
> +	sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_KERNEL_ACCOUNT);
>  
>  	if (shared)
>  		read_lock(&kvm->mmu_lock);
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index d48064503b88..a262e15ebd19 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1583,6 +1583,24 @@ void kvm_arch_sync_events(struct kvm *kvm);
>  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
>  
>  struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn);
> +
> +/*
> + * Tells the appropriate NUMA node location of the page table's page based on
> + * pfn it will point to.

I know what you are trying to say but the wording is a bit awkward. e.g.
"Tells" instead of "Returns", "location" is redundant, "page table's
page", etc. Suggest this:

/*
 * Returns an appropriate NUMA node on which to allocate a page table that
 * maps @pfn.
 */

> + *
> + * Return the nid of the page if pfn is valid and backed by a refcounted page,
> + * otherwise, return the nearest memory node for the current CPU.

I would just drop this as it's just restating the code, which is already
very readable.

> + */
> +static inline int kvm_pfn_to_page_table_nid(kvm_pfn_t pfn)
> +{
> +	struct page *page = kvm_pfn_to_refcounted_page(pfn);
> +
> +	if (page)
> +		return page_to_nid(page);
> +	else
> +		return numa_mem_id();
> +}
> +
>  bool kvm_is_zone_device_page(struct page *page);
>  
>  struct kvm_irq_ack_notifier {
> -- 
> 2.39.0.314.g84b9a713c41-goog
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  2022-12-22  2:34 ` [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{} Vipin Sharma
  2022-12-27 19:09   ` Ben Gardon
@ 2022-12-29 23:08   ` David Matlack
  2022-12-29 23:11     ` David Matlack
  1 sibling, 1 reply; 47+ messages in thread
From: David Matlack @ 2022-12-29 23:08 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Wed, Dec 21, 2022 at 06:34:54PM -0800, Vipin Sharma wrote:
> Add 'node' variable in kvm_mmu_memory_cache{} to denote which NUMA node
> this cache should allocate memory from. Default initialize to
> NUMA_NO_NODE in all architectures.
> 
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/arm64/kvm/arm.c      |  2 +-
>  arch/arm64/kvm/mmu.c      |  4 +++-
>  arch/mips/kvm/mips.c      |  2 ++
>  arch/riscv/kvm/mmu.c      |  2 +-
>  arch/riscv/kvm/vcpu.c     |  2 +-
>  arch/x86/kvm/mmu/mmu.c    | 22 ++++++++++++----------
>  include/linux/kvm_host.h  |  6 ++++++
>  include/linux/kvm_types.h |  2 ++
>  8 files changed, 28 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 9c5573bc4614..52a41f4532e2 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  	vcpu->arch.target = -1;
>  	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
>  
> -	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> +	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
>  
>  	/*
>  	 * Default value for the FP state, will be overloaded at load
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 31d7fa4c7c14..bd07155e17fa 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>  {
>  	phys_addr_t addr;
>  	int ret = 0;
> -	struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> +	struct kvm_mmu_memory_cache cache;
>  	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
>  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
>  				     KVM_PGTABLE_PROT_R |
>  				     (writable ? KVM_PGTABLE_PROT_W : 0);
>  
> +	INIT_KVM_MMU_MEMORY_CACHE(&cache, NULL, NUMA_NO_NODE);

This is not any better than setting cache.node = NUMA_NO_NODE directly.
Yes it's less lines of code, but it's harder to read (what does NULL
mean here?), and every user of kvm_mmu_memory_cache still has to know to
pass NUMA_NO_NODE.

When I originally gave this suggestion, I intended to suggest that
INIT_KVM_MMU_MEMORY_CACHE() provide just default initialization.
Non-default initialization for gfp_zero, gfp_custom, kmem_cache, and
node would remain as they are.

Yes this adds some more lines, but keeps things readable, and doesn't
every initialization site of kvm_mmu_memory_cache to know what to pass
for gfp_zero, node, and kmem_cache. It only needs to set the fields
*it* cares about.

Here's what I mean specifically, based on INIT_LIST_HEAD. I don't think
I got all the kvm_mmu_memory_cache users, but you get the point.


diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9c5573bc4614..0e138dcaf4d4 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -340,6 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.target = -1;
 	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
 
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache);
 	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
 
 	/*
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 31d7fa4c7c14..f5fd78a4f084 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 {
 	phys_addr_t addr;
 	int ret = 0;
-	struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
+	KVM_MMU_MEMORY_CACHE(cache);
 	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
 				     KVM_PGTABLE_PROT_R |
 				     (writable ? KVM_PGTABLE_PROT_W : 0);
 
+	cache.gfp_zero = __GFP_ZERO;
+
 	if (is_protected_kvm_enabled())
 		return -EPERM;
 
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 34b57e0be2ef..7915a5a2d104 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -351,10 +351,11 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 	int ret = 0;
 	unsigned long pfn;
 	phys_addr_t addr, end;
-	struct kvm_mmu_memory_cache pcache = {
-		.gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
-		.gfp_zero = __GFP_ZERO,
-	};
+	KVM_MMU_MEMORY_CACHE(pcache);
+
+	pcache.gfp_zero = __GFP_ZERO;
+	if (in_atomic)
+		pcache.gfp_custom = GFP_ATOMIC | __GFP_ACCOUNT;
 
 	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
 	pfn = __phys_to_pfn(hpa);
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 7c08567097f0..3d73ab3ec9a4 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -161,6 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
 	/* Mark this VCPU never ran */
 	vcpu->arch.ran_atleast_once = false;
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache);
 	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
 	bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 254bc46234e0..d4cd8e64cc03 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5909,14 +5909,19 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 {
 	int ret;
 
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache);
 	vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
 	vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
 
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache);
 	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
 	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
 
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache);
 	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
 
+	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadowed_info_cache);
+
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
 
@@ -6083,11 +6088,14 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
 
+	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache);
 	kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
 	kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
 
+	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache);
 	kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
 
+	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache);
 	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
 	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
 
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 76de36e56cdf..eb7ff9afa5c7 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -98,6 +98,17 @@ struct kvm_mmu_memory_cache {
 	int capacity;
 	void **objects;
 };
+
+#define KVM_MMU_MEMORY_CACHE_INIT() (struct kvm_mmu_memory_cache) { \
+}
+
+#define KVM_MMU_MEMORY_CACHE(_name) \
+	struct kvm_mmu_memory_cache _name = KVM_MMU_MEMORY_CACHE_INIT()
+
+static inline void INIT_KVM_MMU_MEMORY_CACHE(struct kvm_mmu_memory_cache *cache)
+{
+	*cache = KVM_MMU_MEMORY_CACHE_INIT();
+}
 #endif
 
 #define HALT_POLL_HIST_COUNT			32

> +
>  	if (is_protected_kvm_enabled())
>  		return -EPERM;
>  
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index a25e0b73ee70..b017c29a9340 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -304,6 +304,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  		     HRTIMER_MODE_REL);
>  	vcpu->arch.comparecount_timer.function = kvm_mips_comparecount_wakeup;
>  
> +	vcpu->arch.mmu_page_cache.node = NUMA_NO_NODE;
> +
>  	/*
>  	 * Allocate space for host mode exception handlers that handle
>  	 * guest mode exits
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 34b57e0be2ef..119de4520cc6 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -353,9 +353,9 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
>  	phys_addr_t addr, end;
>  	struct kvm_mmu_memory_cache pcache = {
>  		.gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
> -		.gfp_zero = __GFP_ZERO,
>  	};
>  
> +	INIT_KVM_MMU_MEMORY_CACHE(&pcache, NULL, NUMA_NO_NODE);
>  	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
>  	pfn = __phys_to_pfn(hpa);
>  
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index 7c08567097f0..189b14feb365 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -161,7 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  
>  	/* Mark this VCPU never ran */
>  	vcpu->arch.ran_atleast_once = false;
> -	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> +	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
>  	bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
>  
>  	/* Setup ISA features available to VCPU */
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6f6a10d7a871..23a3b82b2384 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5954,13 +5954,14 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  {
>  	int ret;
>  
> -	vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
> -	vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
> +	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
> +				  pte_list_desc_cache, NUMA_NO_NODE);
>  
> -	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> -	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> +	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
> +				  mmu_page_header_cache, NUMA_NO_NODE);
>  
> -	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +	INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
> +				  NULL, NUMA_NO_NODE);
>  	spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
>  
>  	vcpu->arch.mmu = &vcpu->arch.root_mmu;
> @@ -6124,14 +6125,15 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>  	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>  	kvm_page_track_register_notifier(kvm, node);
>  
> -	kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> -	kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
> +	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
> +				  mmu_page_header_cache, NUMA_NO_NODE);
>  
> -	kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> +				  NULL, NUMA_NO_NODE);
>  	spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
>  
> -	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
> -	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
> +	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
> +				  pte_list_desc_cache, NUMA_NO_NODE);
>  
>  	return 0;
>  }
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index a262e15ebd19..719687a37ef7 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2302,4 +2302,10 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
>  /* Max number of entries allowed for each kvm dirty ring */
>  #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
>  
> +#define INIT_KVM_MMU_MEMORY_CACHE(_cache, _kmem_cache, _node) ({	\
> +	(_cache)->kmem_cache = _kmem_cache;				\
> +	(_cache)->gfp_zero = __GFP_ZERO;				\
> +	(_cache)->node = _node;						\
> +})
> +
>  #endif
> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index 76de36e56cdf..9c70ce95e51f 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -97,6 +97,8 @@ struct kvm_mmu_memory_cache {
>  	struct kmem_cache *kmem_cache;
>  	int capacity;
>  	void **objects;
> +	/* Node on which memory should be allocated by default */
> +	int node;
>  };
>  #endif
>  
> -- 
> 2.39.0.314.g84b9a713c41-goog
> 

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  2022-12-29 23:08   ` David Matlack
@ 2022-12-29 23:11     ` David Matlack
  2023-01-03 18:45       ` Vipin Sharma
  0 siblings, 1 reply; 47+ messages in thread
From: David Matlack @ 2022-12-29 23:11 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Thu, Dec 29, 2022 at 3:08 PM David Matlack <dmatlack@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 06:34:54PM -0800, Vipin Sharma wrote:
> > Add 'node' variable in kvm_mmu_memory_cache{} to denote which NUMA node
> > this cache should allocate memory from. Default initialize to
> > NUMA_NO_NODE in all architectures.
> >
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/arm64/kvm/arm.c      |  2 +-
> >  arch/arm64/kvm/mmu.c      |  4 +++-
> >  arch/mips/kvm/mips.c      |  2 ++
> >  arch/riscv/kvm/mmu.c      |  2 +-
> >  arch/riscv/kvm/vcpu.c     |  2 +-
> >  arch/x86/kvm/mmu/mmu.c    | 22 ++++++++++++----------
> >  include/linux/kvm_host.h  |  6 ++++++
> >  include/linux/kvm_types.h |  2 ++
> >  8 files changed, 28 insertions(+), 14 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 9c5573bc4614..52a41f4532e2 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >       vcpu->arch.target = -1;
> >       bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> >
> > -     vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> >
> >       /*
> >        * Default value for the FP state, will be overloaded at load
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 31d7fa4c7c14..bd07155e17fa 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> >  {
> >       phys_addr_t addr;
> >       int ret = 0;
> > -     struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> > +     struct kvm_mmu_memory_cache cache;
> >       struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
> >       enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
> >                                    KVM_PGTABLE_PROT_R |
> >                                    (writable ? KVM_PGTABLE_PROT_W : 0);
> >
> > +     INIT_KVM_MMU_MEMORY_CACHE(&cache, NULL, NUMA_NO_NODE);
>
> This is not any better than setting cache.node = NUMA_NO_NODE directly.
> Yes it's fewer lines of code, but it's harder to read (what does NULL
> mean here?), and every user of kvm_mmu_memory_cache still has to know to
> pass NUMA_NO_NODE.
>
> When I originally gave this suggestion, I intended to suggest that
> INIT_KVM_MMU_MEMORY_CACHE() provide just default initialization.
> Non-default initialization for gfp_zero, gfp_custom, kmem_cache, and
> node would remain as they are.
>
> Yes this adds some more lines, but keeps things readable, and doesn't
> require every initialization site of kvm_mmu_memory_cache to know what
> to pass for gfp_zero, node, and kmem_cache. Each site only needs to set
> the fields *it* cares about.

And to offset the extra lines needed to call INIT_KVM_MMU_MEMORY_CACHE(), we
could finally invert the meaning of gfp_zero so that caches use
__GFP_ZERO by default. The majority of caches want __GFP_ZERO, so that
should cut down a bunch of lines.

>
> Here's what I mean specifically, based on INIT_LIST_HEAD. I don't think
> I got all the kvm_mmu_memory_cache users, but you get the point.
>
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 9c5573bc4614..0e138dcaf4d4 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -340,6 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>         vcpu->arch.target = -1;
>         bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache);
>         vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
>
>         /*
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 31d7fa4c7c14..f5fd78a4f084 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>  {
>         phys_addr_t addr;
>         int ret = 0;
> -       struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> +       KVM_MMU_MEMORY_CACHE(cache);
>         struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
>         enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
>                                      KVM_PGTABLE_PROT_R |
>                                      (writable ? KVM_PGTABLE_PROT_W : 0);
>
> +       cache.gfp_zero = __GFP_ZERO;
> +
>         if (is_protected_kvm_enabled())
>                 return -EPERM;
>
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 34b57e0be2ef..7915a5a2d104 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -351,10 +351,11 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
>         int ret = 0;
>         unsigned long pfn;
>         phys_addr_t addr, end;
> -       struct kvm_mmu_memory_cache pcache = {
> -               .gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
> -               .gfp_zero = __GFP_ZERO,
> -       };
> +       KVM_MMU_MEMORY_CACHE(pcache);
> +
> +       pcache.gfp_zero = __GFP_ZERO;
> +       if (in_atomic)
> +               pcache.gfp_custom = GFP_ATOMIC | __GFP_ACCOUNT;
>
>         end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
>         pfn = __phys_to_pfn(hpa);
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index 7c08567097f0..3d73ab3ec9a4 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -161,6 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>
>         /* Mark this VCPU never ran */
>         vcpu->arch.ran_atleast_once = false;
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache);
>         vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
>         bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 254bc46234e0..d4cd8e64cc03 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5909,14 +5909,19 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  {
>         int ret;
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache);
>         vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
>         vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache);
>         vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
>         vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache);
>         vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadowed_info_cache);
> +
>         vcpu->arch.mmu = &vcpu->arch.root_mmu;
>         vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
>
> @@ -6083,11 +6088,14 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>         node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>         kvm_page_track_register_notifier(kvm, node);
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache);
>         kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
>         kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache);
>         kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
>
> +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache);
>         kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
>         kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
>
> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index 76de36e56cdf..eb7ff9afa5c7 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -98,6 +98,17 @@ struct kvm_mmu_memory_cache {
>         int capacity;
>         void **objects;
>  };
> +
> +#define KVM_MMU_MEMORY_CACHE_INIT() (struct kvm_mmu_memory_cache) { \
> +}
> +
> +#define KVM_MMU_MEMORY_CACHE(_name) \
> +       struct kvm_mmu_memory_cache _name = KVM_MMU_MEMORY_CACHE_INIT()
> +
> +static inline void INIT_KVM_MMU_MEMORY_CACHE(struct kvm_mmu_memory_cache *cache)
> +{
> +       *cache = KVM_MMU_MEMORY_CACHE_INIT();
> +}
>  #endif
>
>  #define HALT_POLL_HIST_COUNT                   32
>
> > +
> >       if (is_protected_kvm_enabled())
> >               return -EPERM;
> >
> > diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> > index a25e0b73ee70..b017c29a9340 100644
> > --- a/arch/mips/kvm/mips.c
> > +++ b/arch/mips/kvm/mips.c
> > @@ -304,6 +304,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >                    HRTIMER_MODE_REL);
> >       vcpu->arch.comparecount_timer.function = kvm_mips_comparecount_wakeup;
> >
> > +     vcpu->arch.mmu_page_cache.node = NUMA_NO_NODE;
> > +
> >       /*
> >        * Allocate space for host mode exception handlers that handle
> >        * guest mode exits
> > diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> > index 34b57e0be2ef..119de4520cc6 100644
> > --- a/arch/riscv/kvm/mmu.c
> > +++ b/arch/riscv/kvm/mmu.c
> > @@ -353,9 +353,9 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> >       phys_addr_t addr, end;
> >       struct kvm_mmu_memory_cache pcache = {
> >               .gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
> > -             .gfp_zero = __GFP_ZERO,
> >       };
> >
> > +     INIT_KVM_MMU_MEMORY_CACHE(&pcache, NULL, NUMA_NO_NODE);
> >       end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
> >       pfn = __phys_to_pfn(hpa);
> >
> > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > index 7c08567097f0..189b14feb365 100644
> > --- a/arch/riscv/kvm/vcpu.c
> > +++ b/arch/riscv/kvm/vcpu.c
> > @@ -161,7 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >
> >       /* Mark this VCPU never ran */
> >       vcpu->arch.ran_atleast_once = false;
> > -     vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> >       bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
> >
> >       /* Setup ISA features available to VCPU */
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 6f6a10d7a871..23a3b82b2384 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -5954,13 +5954,14 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> >  {
> >       int ret;
> >
> > -     vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
> > -     vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
> > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
> > +                               pte_list_desc_cache, NUMA_NO_NODE);
> >
> > -     vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> > -     vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
> > +                               mmu_page_header_cache, NUMA_NO_NODE);
> >
> > -     vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
> > +                               NULL, NUMA_NO_NODE);
> >       spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
> >
> >       vcpu->arch.mmu = &vcpu->arch.root_mmu;
> > @@ -6124,14 +6125,15 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> >       node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
> >       kvm_page_track_register_notifier(kvm, node);
> >
> > -     kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> > -     kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
> > +     INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
> > +                               mmu_page_header_cache, NUMA_NO_NODE);
> >
> > -     kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > +     INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> > +                               NULL, NUMA_NO_NODE);
> >       spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
> >
> > -     kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
> > -     kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
> > +     INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
> > +                               pte_list_desc_cache, NUMA_NO_NODE);
> >
> >       return 0;
> >  }
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index a262e15ebd19..719687a37ef7 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -2302,4 +2302,10 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
> >  /* Max number of entries allowed for each kvm dirty ring */
> >  #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
> >
> > +#define INIT_KVM_MMU_MEMORY_CACHE(_cache, _kmem_cache, _node) ({     \
> > +     (_cache)->kmem_cache = _kmem_cache;                             \
> > +     (_cache)->gfp_zero = __GFP_ZERO;                                \
> > +     (_cache)->node = _node;                                         \
> > +})
> > +
> >  #endif
> > diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> > index 76de36e56cdf..9c70ce95e51f 100644
> > --- a/include/linux/kvm_types.h
> > +++ b/include/linux/kvm_types.h
> > @@ -97,6 +97,8 @@ struct kvm_mmu_memory_cache {
> >       struct kmem_cache *kmem_cache;
> >       int capacity;
> >       void **objects;
> > +     /* Node on which memory should be allocated by default */
> > +     int node;
> >  };
> >  #endif
> >
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >


* Re: [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware
  2022-12-22  2:34 ` [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware Vipin Sharma
  2022-12-27 19:42   ` Ben Gardon
@ 2022-12-29 23:18   ` David Matlack
  2023-01-03 18:49     ` Vipin Sharma
  1 sibling, 1 reply; 47+ messages in thread
From: David Matlack @ 2022-12-29 23:18 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Wed, Dec 21, 2022 at 06:34:56PM -0800, Vipin Sharma wrote:
> Make split_shadow_page_cache NUMA aware and allocate page table's pages
> during the split based on the underlying physical page's NUMA node.
> 
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  2 +-
>  arch/x86/kvm/mmu/mmu.c          | 50 ++++++++++++++++++---------------
>  2 files changed, 29 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index b1f319ad6f89..7b3f36ae37a4 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1410,7 +1410,7 @@ struct kvm_arch {
>  	 *
>  	 * Protected by kvm->slots_lock.
>  	 */
> -	struct kvm_mmu_memory_cache split_shadow_page_cache;
> +	struct kvm_mmu_memory_cache split_shadow_page_cache[MAX_NUMNODES];
>  	struct kvm_mmu_memory_cache split_page_header_cache;
>  
>  	/*
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 511c6ef265ee..7454bfc49a51 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6126,7 +6126,7 @@ static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
>  int kvm_mmu_init_vm(struct kvm *kvm)
>  {
>  	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
> -	int r;
> +	int r, nid;
>  
>  	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
>  	INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
> @@ -6145,8 +6145,9 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>  	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
>  				  mmu_page_header_cache, NUMA_NO_NODE);
>  
> -	INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> -				  NULL, NUMA_NO_NODE);
> +	for_each_node(nid)
> +		INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache[nid],
> +					  NULL, NUMA_NO_NODE);
                                                ^^^^^^^^^^^^
						Should this be nid?


* Re: [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  2022-12-29 18:22       ` Ben Gardon
@ 2023-01-03 17:36         ` Vipin Sharma
  0 siblings, 0 replies; 47+ messages in thread
From: Vipin Sharma @ 2023-01-03 17:36 UTC (permalink / raw)
  To: chenhuacai, aleksandar.qemu.devel
  Cc: seanjc, pbonzini, dmatlack, kvm, linux-kernel, Ben Gardon

On Thu, Dec 29, 2022 at 10:22 AM Ben Gardon <bgardon@google.com> wrote:
>
> On Wed, Dec 28, 2022 at 2:08 PM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > On Tue, Dec 27, 2022 at 11:10 AM Ben Gardon <bgardon@google.com> wrote:
> > >
> > > On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> > > >
> > > > Add 'node' variable in kvm_mmu_memory_cache{} to denote which NUMA node
> > > > this cache should allocate memory from. Default initialize to
> > > > NUMA_NO_NODE in all architectures.
> > > >
> > > > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > > > ---
> > > >  arch/arm64/kvm/arm.c      |  2 +-
> > > >  arch/arm64/kvm/mmu.c      |  4 +++-
> > > >  arch/mips/kvm/mips.c      |  2 ++
> > > >  arch/riscv/kvm/mmu.c      |  2 +-
> > > >  arch/riscv/kvm/vcpu.c     |  2 +-
> > > >  arch/x86/kvm/mmu/mmu.c    | 22 ++++++++++++----------
> > > >  include/linux/kvm_host.h  |  6 ++++++
> > > >  include/linux/kvm_types.h |  2 ++
> > > >  8 files changed, 28 insertions(+), 14 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > > index 9c5573bc4614..52a41f4532e2 100644
> > > > --- a/arch/arm64/kvm/arm.c
> > > > +++ b/arch/arm64/kvm/arm.c
> > > > @@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > > >         vcpu->arch.target = -1;
> > > >         bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> > > >
> > > > -       vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > > > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> > > >
> > > >         /*
> > > >          * Default value for the FP state, will be overloaded at load
> > > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > > index 31d7fa4c7c14..bd07155e17fa 100644
> > > > --- a/arch/arm64/kvm/mmu.c
> > > > +++ b/arch/arm64/kvm/mmu.c
> > > > @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> > > >  {
> > > >         phys_addr_t addr;
> > > >         int ret = 0;
> > > > -       struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> > > > +       struct kvm_mmu_memory_cache cache;
> > > >         struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
> > > >         enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
> > > >                                      KVM_PGTABLE_PROT_R |
> > > >                                      (writable ? KVM_PGTABLE_PROT_W : 0);
> > > >
> > > > +       INIT_KVM_MMU_MEMORY_CACHE(&cache, NULL, NUMA_NO_NODE);
> > > > +
> > > >         if (is_protected_kvm_enabled())
> > > >                 return -EPERM;
> > > >
> > > > diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> > > > index a25e0b73ee70..b017c29a9340 100644
> > > > --- a/arch/mips/kvm/mips.c
> > > > +++ b/arch/mips/kvm/mips.c
> > > > @@ -304,6 +304,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > > >                      HRTIMER_MODE_REL);
> > > >         vcpu->arch.comparecount_timer.function = kvm_mips_comparecount_wakeup;
> > > >
> > > > +       vcpu->arch.mmu_page_cache.node = NUMA_NO_NODE;
> > > > +
> > >
> > > It looks weird to have MIPS not using the initialization MACRO. Should
> > > it just have a GFP_ZERO parameter?
> >
> > MIPS was not setting GFP_ZERO explicitly before my series, so I didn't
> > make it GFP_ZERO. I am not sure whether MIPS needs it; I tried to
> > keep the same functionality in my patch.
> >
> > Maybe someone from MIPS can tell us more about it.
>
> That makes sense; I just don't want to see MIPS get left behind
> because we move the cache init logic to a macro or function. Folks
> might update the init function but forget to update MIPS too.
>

Hi Huacai, Aleksandar,

I have noticed that MIPS doesn't use the __GFP_ZERO flag for
mmu_page_cache in KVM. Is this intentional? Would it be useful to add
the zero flag for this cache in this patch for MIPS? All other
architectures seem to use the __GFP_ZERO flag for their caches.

Thanks
Vipin


* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2022-12-29 21:15       ` David Matlack
@ 2023-01-03 17:38         ` Vipin Sharma
  0 siblings, 0 replies; 47+ messages in thread
From: Vipin Sharma @ 2023-01-03 17:38 UTC (permalink / raw)
  To: David Matlack; +Cc: Ben Gardon, seanjc, pbonzini, kvm, linux-kernel

On Thu, Dec 29, 2022 at 1:15 PM David Matlack <dmatlack@google.com> wrote:
>
> On Wed, Dec 28, 2022 at 02:07:49PM -0800, Vipin Sharma wrote:
> > On Tue, Dec 27, 2022 at 10:37 AM Ben Gardon <bgardon@google.com> wrote:
> > > On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> > > >
> > > > Tested this change by running dirty_log_perf_test while dropping cache
> > > > via "echo 2 > /proc/sys/vm/drop_caches" at 1 second interval
> > > > continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> > > > logs from kvm_mmu_memory_cache_alloc(), which is expected.
> > >
> > > Oh, that's not a good thing. I don't think we want to be hitting those
> > > warnings. For one, kernel warnings should not be expected behavior,
> > > probably for many reasons, but at least because Syzbot will find it.
> > > In this particular case, we don't want to hit that because in that
> > > case we'll try to do a GFP_ATOMIC, which can fail, and if it fails,
> > > we'll BUG:
> > >
> > > void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
> > > {
> > >         void *p;
> > >
> > >         if (WARN_ON(!mc->nobjs))
> > >                 p = mmu_memory_cache_alloc_obj(mc, GFP_ATOMIC | __GFP_ACCOUNT);
> > >         else
> > >                 p = mc->objects[--mc->nobjs];
> > >         BUG_ON(!p);
> > >         return p;
> > > }
> > >
> > > Perhaps the risk of actually panicking is small, but it probably
> > > indicates that we need better error handling around failed allocations
> > > from the cache.
> > > Or, the slightly less elegant approach might be to just hold the cache
> > > lock around the cache topup and use of pages from the cache, but
> > > adding better error handling would probably be cleaner.
> >
> > I was counting on the fact that the shrinker will ideally run only in
> > extreme cases, i.e. when the host is running low on memory, so this
> > WARN_ON should only rarely be hit. I was not aware of Syzbot; it seems
> > like it will be a concern if it does this kind of testing.
>
> In an extreme low-memory situation, forcing vCPUS to do GFP_ATOMIC
> allocations to handle page faults is risky. Plus it's a waste of time to
> free that memory since it's just going to get immediately reallocated.
>
> >
> > I thought about using a mutex, taking it during topup and releasing
> > it after the whole operation is done, but decided against it because
> > the mutex would be held for a long time and might block the memory
> > shrinker. I am not sure, though, whether this is a valid concern.
>
> Use mutex_trylock() to skip any vCPUs that are currently handling page
> faults.

oh yeah! Thanks.


* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2022-12-29 21:54   ` David Matlack
@ 2023-01-03 18:01     ` Vipin Sharma
  2023-01-04  0:25       ` Vipin Sharma
  0 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2023-01-03 18:01 UTC (permalink / raw)
  To: David Matlack; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Thu, Dec 29, 2022 at 1:55 PM David Matlack <dmatlack@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 06:34:49PM -0800, Vipin Sharma wrote:
> > mmu_shrink_scan() is very disruptive to VMs. It picks the first
> > VM in the vm_list and zaps the oldest page, which is most likely an
> > upper-level SPTE and most likely to be reused. Prior to the TDP MMU,
> > this was even more disruptive in the nested VM case, considering L1
> > SPTEs will be the oldest even though most of the entries are for L2
> > SPTEs.
> >
> > As discussed in
> > https://lore.kernel.org/lkml/Y45dldZnI6OIf+a5@google.com/
> > the shrinker logic has not been very useful in actually keeping VMs performant
> > and reducing memory usage.
> >
> > Change mmu_shrink_scan() to free pages from the vCPU's shadow page
> > cache.  Freeing pages from cache doesn't cause vCPU exits, therefore, a
> > VM's performance should not be affected.
>
> Can you split this commit up? e.g. First drop the old shrinking logic in
> one commit (but leave the shrinking infrastructure in place). Then a
> commit to make the shrinker free the per-vCPU shadow page caches. And
> then perhaps another to make the shrinker free the per-VM shadow page
> cache used for eager splitting.
>

Sounds good, I will separate it into two parts: one for dropping the
old logic, one for adding per-vCPU shadow page caches. Patch 3 enables
the shrinker to free the per-VM shadow page cache.

> >
> > This also allows changing cache capacities without worrying too much
> > about high memory usage in the caches.
> >
> > Tested this change by running dirty_log_perf_test while dropping cache
> > via "echo 2 > /proc/sys/vm/drop_caches" at 1 second interval
> > continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> > logs from kvm_mmu_memory_cache_alloc(), which is expected.
> >
> > Suggested-by: Sean Christopherson <seanjc@google.com>
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |   5 +
> >  arch/x86/kvm/mmu/mmu.c          | 163 +++++++++++++++++++-------------
> >  arch/x86/kvm/mmu/mmu_internal.h |   2 +
> >  arch/x86/kvm/mmu/tdp_mmu.c      |   3 +-
> >  include/linux/kvm_host.h        |   1 +
> >  virt/kvm/kvm_main.c             |  11 ++-
> >  6 files changed, 114 insertions(+), 71 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index aa4eb8cfcd7e..89cc809e4a00 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -786,6 +786,11 @@ struct kvm_vcpu_arch {
> >       struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
> >       struct kvm_mmu_memory_cache mmu_page_header_cache;
> >
> > +     /*
> > +      * Protects change in size of mmu_shadow_page_cache cache.
> > +      */
> > +     spinlock_t mmu_shadow_page_cache_lock;
> > +
> >       /*
> >        * QEMU userspace and the guest each have their own FPU state.
> >        * In vcpu_run, we switch between the user and guest FPU contexts.
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 254bc46234e0..157417e1cb6e 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -164,7 +164,10 @@ struct kvm_shadow_walk_iterator {
> >
> >  static struct kmem_cache *pte_list_desc_cache;
> >  struct kmem_cache *mmu_page_header_cache;
> > -static struct percpu_counter kvm_total_used_mmu_pages;
> > +/*
> > + * Total number of unused pages in MMU shadow page cache.
> > + */
> > +static struct percpu_counter kvm_total_unused_mmu_pages;
> >
> >  static void mmu_spte_set(u64 *sptep, u64 spte);
> >
> > @@ -655,6 +658,22 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
> >       }
> >  }
> >
> > +static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > +                                  spinlock_t *cache_lock)
> > +{
> > +     int orig_nobjs;
> > +     int r;
> > +
> > +     spin_lock(cache_lock);
> > +     orig_nobjs = cache->nobjs;
> > +     r = kvm_mmu_topup_memory_cache(cache, PT64_ROOT_MAX_LEVEL);
> > +     if (orig_nobjs != cache->nobjs)
> > +             percpu_counter_add(&kvm_total_unused_mmu_pages,
> > +                                (cache->nobjs - orig_nobjs));
> > +     spin_unlock(cache_lock);
> > +     return r;
> > +}
> > +
> >  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> >  {
> >       int r;
> > @@ -664,8 +683,8 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> >                                      1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
> >       if (r)
> >               return r;
> > -     r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > -                                    PT64_ROOT_MAX_LEVEL);
> > +     r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > +                                   &vcpu->arch.mmu_shadow_page_cache_lock);
> >       if (r)
> >               return r;
> >       if (maybe_indirect) {
> > @@ -678,10 +697,25 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> >                                         PT64_ROOT_MAX_LEVEL);
> >  }
> >
> > +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > +                                  spinlock_t *cache_lock)
> > +{
> > +     int orig_nobjs;
> > +
> > +     spin_lock(cache_lock);
> > +     orig_nobjs = cache->nobjs;
> > +     kvm_mmu_free_memory_cache(cache);
> > +     if (orig_nobjs)
> > +             percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> > +
> > +     spin_unlock(cache_lock);
> > +}
>
> It would be nice to avoid adding these wrapper functions.
>
> Once you add a mutex to protect the caches from being freed while vCPUs
> are in the middle of a page fault you can drop the spin lock. After that
> the only reason to have these wrappers is to update
> kvm_total_unused_mmu_pages.
>
> Do we really need kvm_total_unused_mmu_pages? Why not just dynamically
> calculate the number of of unused pages in mmu_shrink_count()? Or just
> estimate the count, e.g. num_vcpus * KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE?
> Or have per-VM or per-vCPU shrinkers to avoid needing to do any
> aggregation?
>

I think we can drop this; by default we can return num_kvms *
num_vcpus * nodes * KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE.

Whenever mmu_shrink_scan() is called, if there are no pages to free
then return SHRINK_STOP, which will stop any subsequent calls during
that time.


> > +
> >  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
> >  {
> >       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> > -     kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> > +     mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > +                              &vcpu->arch.mmu_shadow_page_cache_lock);
> >       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
>
> mmu_shadowed_info_cache can be freed by the shrinker as well.
>
> >       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
> >  }
> > @@ -1693,27 +1727,15 @@ static int is_empty_shadow_page(u64 *spt)
> >  }
> >  #endif
> >
> > -/*
> > - * This value is the sum of all of the kvm instances's
> > - * kvm->arch.n_used_mmu_pages values.  We need a global,
> > - * aggregate version in order to make the slab shrinker
> > - * faster
> > - */
> > -static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
> > -{
> > -     kvm->arch.n_used_mmu_pages += nr;
> > -     percpu_counter_add(&kvm_total_used_mmu_pages, nr);
> > -}
> > -
> >  static void kvm_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> >  {
> > -     kvm_mod_used_mmu_pages(kvm, +1);
> > +     kvm->arch.n_used_mmu_pages++;
> >       kvm_account_pgtable_pages((void *)sp->spt, +1);
> >  }
> >
> >  static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> >  {
> > -     kvm_mod_used_mmu_pages(kvm, -1);
> > +     kvm->arch.n_used_mmu_pages--;
> >       kvm_account_pgtable_pages((void *)sp->spt, -1);
> >  }
> >
> > @@ -2150,8 +2172,31 @@ struct shadow_page_caches {
> >       struct kvm_mmu_memory_cache *page_header_cache;
> >       struct kvm_mmu_memory_cache *shadow_page_cache;
> >       struct kvm_mmu_memory_cache *shadowed_info_cache;
> > +     /*
> > +      * Protects change in size of shadow_page_cache cache.
> > +      */
> > +     spinlock_t *shadow_page_cache_lock;
> >  };
> >
> > +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> > +                                 spinlock_t *cache_lock)
> > +{
> > +     int orig_nobjs;
> > +     void *page;
> > +
> > +     if (cache_lock) {
> > +             spin_lock(cache_lock);
> > +             orig_nobjs = shadow_page_cache->nobjs;
> > +     }
> > +     page = kvm_mmu_memory_cache_alloc(shadow_page_cache);
> > +     if (cache_lock) {
> > +             if (orig_nobjs)
> > +                     percpu_counter_dec(&kvm_total_unused_mmu_pages);
> > +             spin_unlock(cache_lock);
> > +     }
> > +     return page;
> > +}
> > +
> >  static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
> >                                                     struct shadow_page_caches *caches,
> >                                                     gfn_t gfn,
> > @@ -2161,7 +2206,8 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
> >       struct kvm_mmu_page *sp;
> >
> >       sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
> > -     sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
> > +     sp->spt = kvm_mmu_sp_memory_cache_alloc(caches->shadow_page_cache,
> > +                                             caches->shadow_page_cache_lock);
> >       if (!role.direct)
> >               sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
> >
> > @@ -2218,6 +2264,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
> >               .page_header_cache = &vcpu->arch.mmu_page_header_cache,
> >               .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
> >               .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
> > +             .shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
> >       };
> >
> >       return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
> > @@ -5916,6 +5963,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> >       vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> >
> >       vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > +     spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
> >
> >       vcpu->arch.mmu = &vcpu->arch.root_mmu;
> >       vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> > @@ -6051,11 +6099,6 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
> >               kvm_tdp_mmu_zap_invalidated_roots(kvm);
> >  }
> >
> > -static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
> > -{
> > -     return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
> > -}
> > -
> >  static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
> >                       struct kvm_memory_slot *slot,
> >                       struct kvm_page_track_notifier_node *node)
> > @@ -6277,6 +6320,7 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
> >       /* Direct SPs do not require a shadowed_info_cache. */
> >       caches.page_header_cache = &kvm->arch.split_page_header_cache;
> >       caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> > +     caches.shadow_page_cache_lock = NULL;
> >
> >       /* Safe to pass NULL for vCPU since requesting a direct SP. */
> >       return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
> > @@ -6646,66 +6690,49 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
> >  static unsigned long
> >  mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  {
> > -     struct kvm *kvm;
> > -     int nr_to_scan = sc->nr_to_scan;
> > +     struct kvm_mmu_memory_cache *cache;
> > +     struct kvm *kvm, *first_kvm = NULL;
> >       unsigned long freed = 0;
> > +     /* spinlock for memory cache */
> > +     spinlock_t *cache_lock;
> > +     struct kvm_vcpu *vcpu;
> > +     unsigned long i;
> >
> >       mutex_lock(&kvm_lock);
> >
> >       list_for_each_entry(kvm, &vm_list, vm_list) {
> > -             int idx;
> > -             LIST_HEAD(invalid_list);
> > -
> > -             /*
> > -              * Never scan more than sc->nr_to_scan VM instances.
> > -              * Will not hit this condition practically since we do not try
> > -              * to shrink more than one VM and it is very unlikely to see
> > -              * !n_used_mmu_pages so many times.
> > -              */
> > -             if (!nr_to_scan--)
> > +             if (first_kvm == kvm)
> >                       break;
> > -             /*
> > -              * n_used_mmu_pages is accessed without holding kvm->mmu_lock
> > -              * here. We may skip a VM instance errorneosly, but we do not
> > -              * want to shrink a VM that only started to populate its MMU
> > -              * anyway.
> > -              */
> > -             if (!kvm->arch.n_used_mmu_pages &&
> > -                 !kvm_has_zapped_obsolete_pages(kvm))
> > -                     continue;
> > +             if (!first_kvm)
> > +                     first_kvm = kvm;
> > +             list_move_tail(&kvm->vm_list, &vm_list);
> >
> > -             idx = srcu_read_lock(&kvm->srcu);
> > -             write_lock(&kvm->mmu_lock);
> > +             kvm_for_each_vcpu(i, vcpu, kvm) {
>
> What protects this from racing with vCPU creation/deletion?
>
> > +                     cache = &vcpu->arch.mmu_shadow_page_cache;
> > +                     cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock;
> > +                     if (READ_ONCE(cache->nobjs)) {
> > +                             spin_lock(cache_lock);
> > +                             freed += kvm_mmu_empty_memory_cache(cache);
> > +                             spin_unlock(cache_lock);
> > +                     }
>
> What about freeing kvm->arch.split_shadow_page_cache as well?
>
> >
> > -             if (kvm_has_zapped_obsolete_pages(kvm)) {
> > -                     kvm_mmu_commit_zap_page(kvm,
> > -                           &kvm->arch.zapped_obsolete_pages);
> > -                     goto unlock;
> >               }
> >
> > -             freed = kvm_mmu_zap_oldest_mmu_pages(kvm, sc->nr_to_scan);
> > -
> > -unlock:
> > -             write_unlock(&kvm->mmu_lock);
> > -             srcu_read_unlock(&kvm->srcu, idx);
> > -
> > -             /*
> > -              * unfair on small ones
> > -              * per-vm shrinkers cry out
> > -              * sadness comes quickly
> > -              */
> > -             list_move_tail(&kvm->vm_list, &vm_list);
> > -             break;
> > +             if (freed >= sc->nr_to_scan)
> > +                     break;
> >       }
> >
> > +     if (freed)
> > +             percpu_counter_sub(&kvm_total_unused_mmu_pages, freed);
> >       mutex_unlock(&kvm_lock);
> > +     percpu_counter_sync(&kvm_total_unused_mmu_pages);
> >       return freed;
> >  }
> >
> >  static unsigned long
> >  mmu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
> >  {
> > -     return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> > +     return percpu_counter_sum_positive(&kvm_total_unused_mmu_pages);
> >  }
> >
> >  static struct shrinker mmu_shrinker = {
> > @@ -6820,7 +6847,7 @@ int kvm_mmu_vendor_module_init(void)
> >       if (!mmu_page_header_cache)
> >               goto out;
> >
> > -     if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
> > +     if (percpu_counter_init(&kvm_total_unused_mmu_pages, 0, GFP_KERNEL))
> >               goto out;
> >
> >       ret = register_shrinker(&mmu_shrinker, "x86-mmu");
> > @@ -6830,7 +6857,7 @@ int kvm_mmu_vendor_module_init(void)
> >       return 0;
> >
> >  out_shrinker:
> > -     percpu_counter_destroy(&kvm_total_used_mmu_pages);
> > +     percpu_counter_destroy(&kvm_total_unused_mmu_pages);
> >  out:
> >       mmu_destroy_caches();
> >       return ret;
> > @@ -6847,7 +6874,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
> >  void kvm_mmu_vendor_module_exit(void)
> >  {
> >       mmu_destroy_caches();
> > -     percpu_counter_destroy(&kvm_total_used_mmu_pages);
> > +     percpu_counter_destroy(&kvm_total_unused_mmu_pages);
> >       unregister_shrinker(&mmu_shrinker);
> >  }
> >
> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > index ac00bfbf32f6..c2a342028b6a 100644
> > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > @@ -325,4 +325,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> >  void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> >  void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> >
> > +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> > +                                 spinlock_t *cache_lock);
> >  #endif /* __KVM_X86_MMU_INTERNAL_H */
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 764f7c87286f..4974fa96deff 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -264,7 +264,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> >       struct kvm_mmu_page *sp;
> >
> >       sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> > -     sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> > +     sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
> > +                                             &vcpu->arch.mmu_shadow_page_cache_lock);
> >
> >       return sp;
> >  }
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 01aad8b74162..efd9b38ea9a2 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -1362,6 +1362,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
> >  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
> >  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);
> >  int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
> > +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc);
> >  void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
> >  void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> >  #endif
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 13e88297f999..f2d762878b97 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -438,8 +438,10 @@ int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
> >       return mc->nobjs;
> >  }
> >
> > -void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> > +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc)
> >  {
> > +     int freed = mc->nobjs;
> > +
> >       while (mc->nobjs) {
> >               if (mc->kmem_cache)
> >                       kmem_cache_free(mc->kmem_cache, mc->objects[--mc->nobjs]);
> > @@ -447,8 +449,13 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> >                       free_page((unsigned long)mc->objects[--mc->nobjs]);
> >       }
> >
> > -     kvfree(mc->objects);
> > +     return freed;
> > +}
> >
> > +void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> > +{
> > +     kvm_mmu_empty_memory_cache(mc);
> > +     kvfree(mc->objects);
> >       mc->objects = NULL;
> >       mc->capacity = 0;
> >  }
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split
  2022-12-29 22:30   ` David Matlack
@ 2023-01-03 18:26     ` Vipin Sharma
  0 siblings, 0 replies; 47+ messages in thread
From: Vipin Sharma @ 2023-01-03 18:26 UTC (permalink / raw)
  To: David Matlack; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Thu, Dec 29, 2022 at 2:30 PM David Matlack <dmatlack@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 06:34:53PM -0800, Vipin Sharma wrote:
> > When dirty log is enabled, huge pages are split. Page table's pages
> > during the split are allocated based on the current thread NUMA node or
> > mempolicy. This causes inefficient page table accesses if underlying
> > page is on a different NUMA node
> >
> > Allocate page table's pages on the same NUMA node as the underlying huge
> > page when dirty log is enabled and huge pages are split.
> >
> > The performance gain during the pre-copy phase of live migrations of a
> > 416 vCPU, 11 TiB memory VM on an 8 node host was seen in the range
> > of 130% to 150%.
>
> Can you be more specific about this. "The performance" is vague. I know
> it's an internal workload and fully explaining it would be difficult,
> but you can give readers a slightly more specific idea of what improved.
> e.g.
>
>  When testing with a synthetic write-heavy workload in a 416 vCPU VM on
>  an 8 NUMA node host, the throughput increased by 150% from X to Y
>  operations per second.
>
> It's also necessary to characterize the improvement relative to the
> performance when dirty logging is not enabled. Whithout that information
> it would be hard for an unfamiliar reader to understand how useful this
> change really is.
>
> For example, let's say the throughput of your workload is 100,000
> operations per second before dirty logging is enabled, and that drops
> down to 1,000 operations per second after dirty logging is enabled. This
> commit could increase that by 150% to 2,500 operations per second, but
> that's actually not a very meaningful improvement since, either way,
> guest performance is degraded by 95+% during dirty logging.
>
> On the other hand, if performance goes from 100,000 to 30,000 normally,
> and this commit increases that 30,000 to 75,000 (150%), that's a much
> more meaningful improvement.
>

Yeah, I will provide more insight in the next version.

> >
> > Suggested-by: David Matlack <dmatlack@google.com>
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/x86/kvm/mmu/tdp_mmu.c | 12 ++++++++----
> >  include/linux/kvm_host.h   | 18 ++++++++++++++++++
> >  2 files changed, 26 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 4974fa96deff..376b8dceb3f9 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -1403,7 +1403,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
> >       return spte_set;
> >  }
> >
> > -static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
> > +static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(int nid, gfp_t gfp)
> >  {
> >       struct kvm_mmu_page *sp;
> >
> > @@ -1413,7 +1413,8 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
> >       if (!sp)
> >               return NULL;
> >
> > -     sp->spt = (void *)__get_free_page(gfp);
> > +     sp->spt = kvm_mmu_get_free_page(nid, gfp);
> > +
> >       if (!sp->spt) {
> >               kmem_cache_free(mmu_page_header_cache, sp);
> >               return NULL;
> > @@ -1427,6 +1428,9 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
> >                                                      bool shared)
> >  {
> >       struct kvm_mmu_page *sp;
> > +     int nid;
> > +
> > +     nid = kvm_pfn_to_page_table_nid(spte_to_pfn(iter->old_spte));
> >
> >       /*
> >        * Since we are allocating while under the MMU lock we have to be
> > @@ -1437,7 +1441,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
> >        * If this allocation fails we drop the lock and retry with reclaim
> >        * allowed.
> >        */
> > -     sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
> > +     sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_NOWAIT | __GFP_ACCOUNT);
> >       if (sp)
> >               return sp;
> >
> > @@ -1449,7 +1453,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
> >               write_unlock(&kvm->mmu_lock);
> >
> >       iter->yielded = true;
> > -     sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
> > +     sp = __tdp_mmu_alloc_sp_for_split(nid, GFP_KERNEL_ACCOUNT);
> >
> >       if (shared)
> >               read_lock(&kvm->mmu_lock);
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index d48064503b88..a262e15ebd19 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -1583,6 +1583,24 @@ void kvm_arch_sync_events(struct kvm *kvm);
> >  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
> >
> >  struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn);
> > +
> > +/*
> > + * Tells the appropriate NUMA node location of the page table's page based on
> > + * pfn it will point to.
>
> I know what you are trying to say but the wording is a bit awkward. e.g.
> "Tells" instead of "Returns", "location" is redundant, "page table's
> page", etc. Suggest this:
>
> /*
>  * Returns an appropriate NUMA node on which to allocate a page table that
>  * maps @pfn.
>  */
>
> > + *
> > + * Return the nid of the page if pfn is valid and backed by a refcounted page,
> > + * otherwise, return the nearest memory node for the current CPU.
>
> I would just drop this as it's just restating the code, which is already
> very readable.
>

Okay.

> > + */
> > +static inline int kvm_pfn_to_page_table_nid(kvm_pfn_t pfn)
> > +{
> > +     struct page *page = kvm_pfn_to_refcounted_page(pfn);
> > +
> > +     if (page)
> > +             return page_to_nid(page);
> > +     else
> > +             return numa_mem_id();
> > +}
> > +
> >  bool kvm_is_zone_device_page(struct page *page);
> >
> >  struct kvm_irq_ack_notifier {
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  2022-12-29 23:11     ` David Matlack
@ 2023-01-03 18:45       ` Vipin Sharma
  2023-01-03 18:55         ` David Matlack
  0 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2023-01-03 18:45 UTC (permalink / raw)
  To: David Matlack; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Thu, Dec 29, 2022 at 3:12 PM David Matlack <dmatlack@google.com> wrote:
>
> On Thu, Dec 29, 2022 at 3:08 PM David Matlack <dmatlack@google.com> wrote:
> >
> > On Wed, Dec 21, 2022 at 06:34:54PM -0800, Vipin Sharma wrote:
> > > Add 'node' variable in kvm_mmu_memory_cache{} to denote which NUMA node
> > > this cache should allocate memory from. Default initialize to
> > > NUMA_NO_NODE in all architectures.
> > >
> > > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > > ---
> > >  arch/arm64/kvm/arm.c      |  2 +-
> > >  arch/arm64/kvm/mmu.c      |  4 +++-
> > >  arch/mips/kvm/mips.c      |  2 ++
> > >  arch/riscv/kvm/mmu.c      |  2 +-
> > >  arch/riscv/kvm/vcpu.c     |  2 +-
> > >  arch/x86/kvm/mmu/mmu.c    | 22 ++++++++++++----------
> > >  include/linux/kvm_host.h  |  6 ++++++
> > >  include/linux/kvm_types.h |  2 ++
> > >  8 files changed, 28 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index 9c5573bc4614..52a41f4532e2 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > >       vcpu->arch.target = -1;
> > >       bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> > >
> > > -     vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> > >
> > >       /*
> > >        * Default value for the FP state, will be overloaded at load
> > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > index 31d7fa4c7c14..bd07155e17fa 100644
> > > --- a/arch/arm64/kvm/mmu.c
> > > +++ b/arch/arm64/kvm/mmu.c
> > > @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> > >  {
> > >       phys_addr_t addr;
> > >       int ret = 0;
> > > -     struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> > > +     struct kvm_mmu_memory_cache cache;
> > >       struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
> > >       enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
> > >                                    KVM_PGTABLE_PROT_R |
> > >                                    (writable ? KVM_PGTABLE_PROT_W : 0);
> > >
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&cache, NULL, NUMA_NO_NODE);
> >
> > This is not any better than setting cache.node = NUMA_NO_NODE directly.
> > Yes it's less lines of code, but it's harder to read (what does NULL
> > mean here?), and every user of kvm_mmu_memory_cache still has to know to
> > pass NUMA_NO_NODE.
> >
> > When I originally gave this suggestion, I intended to suggest that
> > INIT_KVM_MMU_MEMORY_CACHE() provide just default initialization.
> > Non-default initialization for gfp_zero, gfp_custom, kmem_cache, and
> > node would remain as they are.
> >
> > Yes this adds some more lines, but keeps things readable, and doesn't
> > every initialization site of kvm_mmu_memory_cache to know what to pass
> > for gfp_zero, node, and kmem_cache. It only needs to set the fields
> > *it* cares about.
>
> And to offset the extra lines to call INIT_KVM_MMU_MEMORY_CACHE(), we
> could finally invert the meaning of gfp_zero so that caches use
> __GFP_ZERO by default. The majority of caches want __GFP_ZERO, so that
> should cut down a bunch of lines.
>

Can you clarify what you mean by invert?

Caches which don't want __GFP_ZERO will explicitly set gfp_zero to 0.
Is this what you intend?


> >
> > Here's what I mean specifically, based on INIT_LIST_HEAD. I don't think
> > I got all the kvm_mmu_memory_cache users, but you get the point.
> >
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 9c5573bc4614..0e138dcaf4d4 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -340,6 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >         vcpu->arch.target = -1;
> >         bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache);
> >         vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> >
> >         /*
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 31d7fa4c7c14..f5fd78a4f084 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> >  {
> >         phys_addr_t addr;
> >         int ret = 0;
> > -       struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> > +       KVM_MMU_MEMORY_CACHE(cache);
> >         struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
> >         enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
> >                                      KVM_PGTABLE_PROT_R |
> >                                      (writable ? KVM_PGTABLE_PROT_W : 0);
> >
> > +       cache.gfp_zero = __GFP_ZERO;
> > +
> >         if (is_protected_kvm_enabled())
> >                 return -EPERM;
> >
> > diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> > index 34b57e0be2ef..7915a5a2d104 100644
> > --- a/arch/riscv/kvm/mmu.c
> > +++ b/arch/riscv/kvm/mmu.c
> > @@ -351,10 +351,11 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> >         int ret = 0;
> >         unsigned long pfn;
> >         phys_addr_t addr, end;
> > -       struct kvm_mmu_memory_cache pcache = {
> > -               .gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
> > -               .gfp_zero = __GFP_ZERO,
> > -       };
> > +       KVM_MMU_MEMORY_CACHE(pcache);
> > +
> > +       pcache.gfp_zero = __GFP_ZERO;
> > +       if (in_atomic)
> > +               pcache.gfp_custom = GFP_ATOMIC | __GFP_ACCOUNT;
> >
> >         end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
> >         pfn = __phys_to_pfn(hpa);
> > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > index 7c08567097f0..3d73ab3ec9a4 100644
> > --- a/arch/riscv/kvm/vcpu.c
> > +++ b/arch/riscv/kvm/vcpu.c
> > @@ -161,6 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >
> >         /* Mark this VCPU never ran */
> >         vcpu->arch.ran_atleast_once = false;
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache);
> >         vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> >         bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 254bc46234e0..d4cd8e64cc03 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -5909,14 +5909,19 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> >  {
> >         int ret;
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache);
> >         vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
> >         vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache);
> >         vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> >         vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache);
> >         vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadowed_info_cache);
> > +
> >         vcpu->arch.mmu = &vcpu->arch.root_mmu;
> >         vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> >
> > @@ -6083,11 +6088,14 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> >         node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
> >         kvm_page_track_register_notifier(kvm, node);
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache);
> >         kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> >         kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache);
> >         kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
> >
> > +       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache);
> >         kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
> >         kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
> >
> > diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> > index 76de36e56cdf..eb7ff9afa5c7 100644
> > --- a/include/linux/kvm_types.h
> > +++ b/include/linux/kvm_types.h
> > @@ -98,6 +98,17 @@ struct kvm_mmu_memory_cache {
> >         int capacity;
> >         void **objects;
> >  };
> > +
> > +#define KVM_MMU_MEMORY_CACHE_INIT() (struct kvm_mmu_memory_cache) { \
> > +}
> > +
> > +#define KVM_MMU_MEMORY_CACHE(_name) \
> > +       struct kvm_mmu_memory_cache _name = KVM_MMU_MEMORY_CACHE_INIT()
> > +
> > +static inline void INIT_KVM_MMU_MEMORY_CACHE(struct kvm_mmu_memory_cache *cache)
> > +{
> > +       *cache = KVM_MMU_MEMORY_CACHE_INIT();
> > +}
> >  #endif
> >
> >  #define HALT_POLL_HIST_COUNT                   32
> >
> > > +
> > >       if (is_protected_kvm_enabled())
> > >               return -EPERM;
> > >
> > > diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> > > index a25e0b73ee70..b017c29a9340 100644
> > > --- a/arch/mips/kvm/mips.c
> > > +++ b/arch/mips/kvm/mips.c
> > > @@ -304,6 +304,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > >                    HRTIMER_MODE_REL);
> > >       vcpu->arch.comparecount_timer.function = kvm_mips_comparecount_wakeup;
> > >
> > > +     vcpu->arch.mmu_page_cache.node = NUMA_NO_NODE;
> > > +
> > >       /*
> > >        * Allocate space for host mode exception handlers that handle
> > >        * guest mode exits
> > > diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> > > index 34b57e0be2ef..119de4520cc6 100644
> > > --- a/arch/riscv/kvm/mmu.c
> > > +++ b/arch/riscv/kvm/mmu.c
> > > @@ -353,9 +353,9 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> > >       phys_addr_t addr, end;
> > >       struct kvm_mmu_memory_cache pcache = {
> > >               .gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
> > > -             .gfp_zero = __GFP_ZERO,
> > >       };
> > >
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&pcache, NULL, NUMA_NO_NODE);
> > >       end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
> > >       pfn = __phys_to_pfn(hpa);
> > >
> > > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > > index 7c08567097f0..189b14feb365 100644
> > > --- a/arch/riscv/kvm/vcpu.c
> > > +++ b/arch/riscv/kvm/vcpu.c
> > > @@ -161,7 +161,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > >
> > >       /* Mark this VCPU never ran */
> > >       vcpu->arch.ran_atleast_once = false;
> > > -     vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> > >       bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
> > >
> > >       /* Setup ISA features available to VCPU */
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 6f6a10d7a871..23a3b82b2384 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -5954,13 +5954,14 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> > >  {
> > >       int ret;
> > >
> > > -     vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
> > > -     vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_pte_list_desc_cache,
> > > +                               pte_list_desc_cache, NUMA_NO_NODE);
> > >
> > > -     vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> > > -     vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_header_cache,
> > > +                               mmu_page_header_cache, NUMA_NO_NODE);
> > >
> > > -     vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_shadow_page_cache,
> > > +                               NULL, NUMA_NO_NODE);
> > >       spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
> > >
> > >       vcpu->arch.mmu = &vcpu->arch.root_mmu;
> > > @@ -6124,14 +6125,15 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> > >       node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
> > >       kvm_page_track_register_notifier(kvm, node);
> > >
> > > -     kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> > > -     kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
> > > +                               mmu_page_header_cache, NUMA_NO_NODE);
> > >
> > > -     kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> > > +                               NULL, NUMA_NO_NODE);
> > >       spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
> > >
> > > -     kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
> > > -     kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
> > > +     INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
> > > +                               pte_list_desc_cache, NUMA_NO_NODE);
> > >
> > >       return 0;
> > >  }
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index a262e15ebd19..719687a37ef7 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -2302,4 +2302,10 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
> > >  /* Max number of entries allowed for each kvm dirty ring */
> > >  #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
> > >
> > > +#define INIT_KVM_MMU_MEMORY_CACHE(_cache, _kmem_cache, _node) ({     \
> > > +     (_cache)->kmem_cache = _kmem_cache;                             \
> > > +     (_cache)->gfp_zero = __GFP_ZERO;                                \
> > > +     (_cache)->node = _node;                                         \
> > > +})
> > > +
> > >  #endif
> > > diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> > > index 76de36e56cdf..9c70ce95e51f 100644
> > > --- a/include/linux/kvm_types.h
> > > +++ b/include/linux/kvm_types.h
> > > @@ -97,6 +97,8 @@ struct kvm_mmu_memory_cache {
> > >       struct kmem_cache *kmem_cache;
> > >       int capacity;
> > >       void **objects;
> > > +     /* Node on which memory should be allocated by default */
> > > +     int node;
> > >  };
> > >  #endif
> > >
> > > --
> > > 2.39.0.314.g84b9a713c41-goog
> > >

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware
  2022-12-29 23:18   ` David Matlack
@ 2023-01-03 18:49     ` Vipin Sharma
  0 siblings, 0 replies; 47+ messages in thread
From: Vipin Sharma @ 2023-01-03 18:49 UTC (permalink / raw)
  To: David Matlack; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Thu, Dec 29, 2022 at 3:18 PM David Matlack <dmatlack@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 06:34:56PM -0800, Vipin Sharma wrote:
> > Make split_shadow_page_cache NUMA aware and allocate page table's pages
> > during the split based on the underlying physical page's NUMA node.
> >
> > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  2 +-
> >  arch/x86/kvm/mmu/mmu.c          | 50 ++++++++++++++++++---------------
> >  2 files changed, 29 insertions(+), 23 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index b1f319ad6f89..7b3f36ae37a4 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1410,7 +1410,7 @@ struct kvm_arch {
> >        *
> >        * Protected by kvm->slots_lock.
> >        */
> > -     struct kvm_mmu_memory_cache split_shadow_page_cache;
> > +     struct kvm_mmu_memory_cache split_shadow_page_cache[MAX_NUMNODES];
> >       struct kvm_mmu_memory_cache split_page_header_cache;
> >
> >       /*
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 511c6ef265ee..7454bfc49a51 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -6126,7 +6126,7 @@ static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
> >  int kvm_mmu_init_vm(struct kvm *kvm)
> >  {
> >       struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
> > -     int r;
> > +     int r, nid;
> >
> >       INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
> >       INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
> > @@ -6145,8 +6145,9 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> >       INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
> >                                 mmu_page_header_cache, NUMA_NO_NODE);
> >
> > -     INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> > -                               NULL, NUMA_NO_NODE);
> > +     for_each_node(nid)
> > +             INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache[nid],
> > +                                       NULL, NUMA_NO_NODE);
>                                                 ^^^^^^^^^^^^
>                                                 Should this be nid?
Yes, I will fix it in the next version. Thanks

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{}
  2023-01-03 18:45       ` Vipin Sharma
@ 2023-01-03 18:55         ` David Matlack
  0 siblings, 0 replies; 47+ messages in thread
From: David Matlack @ 2023-01-03 18:55 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, kvm, linux-kernel

On Tue, Jan 3, 2023 at 10:46 AM Vipin Sharma <vipinsh@google.com> wrote:
>
> On Thu, Dec 29, 2022 at 3:12 PM David Matlack <dmatlack@google.com> wrote:
> >
> > On Thu, Dec 29, 2022 at 3:08 PM David Matlack <dmatlack@google.com> wrote:
> > >
> > > On Wed, Dec 21, 2022 at 06:34:54PM -0800, Vipin Sharma wrote:
> > > > Add a 'node' variable in kvm_mmu_memory_cache{} to denote which NUMA
> > > > node this cache should allocate memory from. Default-initialize it to
> > > > NUMA_NO_NODE in all architectures.
> > > >
> > > > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > > > ---
> > > >  arch/arm64/kvm/arm.c      |  2 +-
> > > >  arch/arm64/kvm/mmu.c      |  4 +++-
> > > >  arch/mips/kvm/mips.c      |  2 ++
> > > >  arch/riscv/kvm/mmu.c      |  2 +-
> > > >  arch/riscv/kvm/vcpu.c     |  2 +-
> > > >  arch/x86/kvm/mmu/mmu.c    | 22 ++++++++++++----------
> > > >  include/linux/kvm_host.h  |  6 ++++++
> > > >  include/linux/kvm_types.h |  2 ++
> > > >  8 files changed, 28 insertions(+), 14 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > > index 9c5573bc4614..52a41f4532e2 100644
> > > > --- a/arch/arm64/kvm/arm.c
> > > > +++ b/arch/arm64/kvm/arm.c
> > > > @@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> > > >       vcpu->arch.target = -1;
> > > >       bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> > > >
> > > > -     vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> > > > +     INIT_KVM_MMU_MEMORY_CACHE(&vcpu->arch.mmu_page_cache, NULL, NUMA_NO_NODE);
> > > >
> > > >       /*
> > > >        * Default value for the FP state, will be overloaded at load
> > > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > > index 31d7fa4c7c14..bd07155e17fa 100644
> > > > --- a/arch/arm64/kvm/mmu.c
> > > > +++ b/arch/arm64/kvm/mmu.c
> > > > @@ -894,12 +894,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> > > >  {
> > > >       phys_addr_t addr;
> > > >       int ret = 0;
> > > > -     struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
> > > > +     struct kvm_mmu_memory_cache cache;
> > > >       struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
> > > >       enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
> > > >                                    KVM_PGTABLE_PROT_R |
> > > >                                    (writable ? KVM_PGTABLE_PROT_W : 0);
> > > >
> > > > +     INIT_KVM_MMU_MEMORY_CACHE(&cache, NULL, NUMA_NO_NODE);
> > >
> > > This is not any better than setting cache.node = NUMA_NO_NODE directly.
> > > Yes it's less lines of code, but it's harder to read (what does NULL
> > > mean here?), and every user of kvm_mmu_memory_cache still has to know to
> > > pass NUMA_NO_NODE.
> > >
> > > When I originally gave this suggestion, I intended to suggest that
> > > INIT_KVM_MMU_MEMORY_CACHE() provide just default initialization.
> > > Non-default initialization for gfp_zero, gfp_custom, kmem_cache, and
> > > node would remain as they are.
> > >
> > > Yes this adds some more lines, but it keeps things readable and doesn't
> > > require every initialization site of kvm_mmu_memory_cache to know what
> > > to pass for gfp_zero, node, and kmem_cache. It only needs to set the
> > > fields *it* cares about.
> >
> > And to offset the extra lines to call INIT_KVM_MMU_MEMORY_CACHE(), we
> > could finally invert the meaning of gfp_zero so that caches use
> > __GFP_ZERO by default. The majority of caches want __GFP_ZERO, so that
> > should cut down a bunch of lines.
> >
>
> Can you clarify what you mean by invert?
>
> Caches which don't want __GFP_ZERO will explicitly set gfp_zero to 0.
> Is this what you intend?

When I wrote that comment I was thinking you could change `gfp_t
gfp_zero` to e.g. `bool skip_gfp_zero` so that the default-initialized
value (false/0) means "use __GFP_ZERO".

However, that's silly once we have INIT_KVM_MMU_MEMORY_CACHE(). We can
do what you suggest: set gfp_zero to __GFP_ZERO in
INIT_KVM_MMU_MEMORY_CACHE() and then explicitly set it to 0 in caches
that don't need __GFP_ZERO.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2022-12-22  2:34 ` [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches Vipin Sharma
  2022-12-27 18:37   ` Ben Gardon
  2022-12-29 21:54   ` David Matlack
@ 2023-01-03 19:32   ` Mingwei Zhang
  2023-01-04  1:00     ` Vipin Sharma
  2023-01-16  4:14   ` kernel test robot
  3 siblings, 1 reply; 47+ messages in thread
From: Mingwei Zhang @ 2023-01-03 19:32 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, dmatlack, kvm, linux-kernel

On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> mmu_shrink_scan() is very disruptive to VMs. It picks the first
> VM in the vm_list and zaps the oldest page, which is most likely an
> upper-level SPTE and most likely to be reused. Prior to the TDP MMU,
> this was even more disruptive in the nested VM case, since L1 SPTEs
> will be the oldest even though most of the entries are for L2 SPTEs.
>
> As discussed in
> https://lore.kernel.org/lkml/Y45dldZnI6OIf+a5@google.com/
> the shrinker logic has not been very useful in actually keeping VMs
> performant or reducing memory usage.
>
> Change mmu_shrink_scan() to free pages from the vCPU's shadow page
> cache. Freeing pages from the cache doesn't cause vCPU exits;
> therefore, a VM's performance should not be affected.
>
> This also allows changing cache capacities without worrying too much
> about high memory usage in the caches.
>
> Tested this change by running dirty_log_perf_test while dropping caches
> via "echo 2 > /proc/sys/vm/drop_caches" at a 1 second interval
> continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> logs from kvm_mmu_memory_cache_alloc(), which is expected.
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Vipin Sharma <vipinsh@google.com>
> ---
>  arch/x86/include/asm/kvm_host.h |   5 +
>  arch/x86/kvm/mmu/mmu.c          | 163 +++++++++++++++++++-------------
>  arch/x86/kvm/mmu/mmu_internal.h |   2 +
>  arch/x86/kvm/mmu/tdp_mmu.c      |   3 +-
>  include/linux/kvm_host.h        |   1 +
>  virt/kvm/kvm_main.c             |  11 ++-
>  6 files changed, 114 insertions(+), 71 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index aa4eb8cfcd7e..89cc809e4a00 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -786,6 +786,11 @@ struct kvm_vcpu_arch {
>         struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
>         struct kvm_mmu_memory_cache mmu_page_header_cache;
>
> +       /*
> +        * Protects change in size of mmu_shadow_page_cache cache.
> +        */
> +       spinlock_t mmu_shadow_page_cache_lock;
> +
>         /*
>          * QEMU userspace and the guest each have their own FPU state.
>          * In vcpu_run, we switch between the user and guest FPU contexts.
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 254bc46234e0..157417e1cb6e 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -164,7 +164,10 @@ struct kvm_shadow_walk_iterator {
>
>  static struct kmem_cache *pte_list_desc_cache;
>  struct kmem_cache *mmu_page_header_cache;
> -static struct percpu_counter kvm_total_used_mmu_pages;
> +/*
> + * Total number of unused pages in MMU shadow page cache.
> + */
> +static struct percpu_counter kvm_total_unused_mmu_pages;
>
>  static void mmu_spte_set(u64 *sptep, u64 spte);
>
> @@ -655,6 +658,22 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
>         }
>  }
>
> +static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> +                                    spinlock_t *cache_lock)
> +{
> +       int orig_nobjs;
> +       int r;
> +
> +       spin_lock(cache_lock);
> +       orig_nobjs = cache->nobjs;
> +       r = kvm_mmu_topup_memory_cache(cache, PT64_ROOT_MAX_LEVEL);
> +       if (orig_nobjs != cache->nobjs)
> +               percpu_counter_add(&kvm_total_unused_mmu_pages,
> +                                  (cache->nobjs - orig_nobjs));
> +       spin_unlock(cache_lock);
> +       return r;
> +}
> +
>  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>  {
>         int r;
> @@ -664,8 +683,8 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>                                        1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
>         if (r)
>                 return r;
> -       r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> -                                      PT64_ROOT_MAX_LEVEL);
> +       r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> +                                     &vcpu->arch.mmu_shadow_page_cache_lock);
>         if (r)
>                 return r;
>         if (maybe_indirect) {
> @@ -678,10 +697,25 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
>                                           PT64_ROOT_MAX_LEVEL);
>  }
>
> +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> +                                    spinlock_t *cache_lock)
> +{
> +       int orig_nobjs;
> +
> +       spin_lock(cache_lock);
> +       orig_nobjs = cache->nobjs;
> +       kvm_mmu_free_memory_cache(cache);
> +       if (orig_nobjs)
> +               percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> +
> +       spin_unlock(cache_lock);
> +}

I think the mmu_cache allocation and deallocation may force the use
of GFP_ATOMIC (as observed by other reviewers as well). Adding a new
lock definitely sounds like a plan, but I think it might affect
performance. Alternatively, I am wondering if we could use an
mmu_cache_sequence, similar to mmu_notifier_seq, to help avoid the
concurrency?

Similar to mmu_notifier_seq, mmu_cache_sequence should be protected by
the mmu write lock. In the page fault path, each vCPU has to collect a
snapshot of mmu_cache_sequence before calling into
mmu_topup_memory_caches() and check the value again when holding the
mmu lock. If the value is different, that means the mmu_shrinker has
removed the cache objects, and because of that, the vCPU should retry.


> +
>  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
>  {
>         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> -       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> +       mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> +                                &vcpu->arch.mmu_shadow_page_cache_lock);
>         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
>         kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
>  }
> @@ -1693,27 +1727,15 @@ static int is_empty_shadow_page(u64 *spt)
>  }
>  #endif
>
> -/*
> - * This value is the sum of all of the kvm instances's
> - * kvm->arch.n_used_mmu_pages values.  We need a global,
> - * aggregate version in order to make the slab shrinker
> - * faster
> - */
> -static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
> -{
> -       kvm->arch.n_used_mmu_pages += nr;
> -       percpu_counter_add(&kvm_total_used_mmu_pages, nr);
> -}
> -
>  static void kvm_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
> -       kvm_mod_used_mmu_pages(kvm, +1);
> +       kvm->arch.n_used_mmu_pages++;
>         kvm_account_pgtable_pages((void *)sp->spt, +1);
>  }
>
>  static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
> -       kvm_mod_used_mmu_pages(kvm, -1);
> +       kvm->arch.n_used_mmu_pages--;
>         kvm_account_pgtable_pages((void *)sp->spt, -1);
>  }
>
> @@ -2150,8 +2172,31 @@ struct shadow_page_caches {
>         struct kvm_mmu_memory_cache *page_header_cache;
>         struct kvm_mmu_memory_cache *shadow_page_cache;
>         struct kvm_mmu_memory_cache *shadowed_info_cache;
> +       /*
> +        * Protects change in size of shadow_page_cache cache.
> +        */
> +       spinlock_t *shadow_page_cache_lock;
>  };
>
> +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> +                                   spinlock_t *cache_lock)
> +{
> +       int orig_nobjs;
> +       void *page;
> +
> +       if (cache_lock) {
> +               spin_lock(cache_lock);
> +               orig_nobjs = shadow_page_cache->nobjs;
> +       }
> +       page = kvm_mmu_memory_cache_alloc(shadow_page_cache);
> +       if (cache_lock) {
> +               if (orig_nobjs)
> +                       percpu_counter_dec(&kvm_total_unused_mmu_pages);
> +               spin_unlock(cache_lock);
> +       }
> +       return page;
> +}
> +
>  static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
>                                                       struct shadow_page_caches *caches,
>                                                       gfn_t gfn,
> @@ -2161,7 +2206,8 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
>         struct kvm_mmu_page *sp;
>
>         sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
> -       sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
> +       sp->spt = kvm_mmu_sp_memory_cache_alloc(caches->shadow_page_cache,
> +                                               caches->shadow_page_cache_lock);
>         if (!role.direct)
>                 sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
>
> @@ -2218,6 +2264,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
>                 .page_header_cache = &vcpu->arch.mmu_page_header_cache,
>                 .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
>                 .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
> +               .shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
>         };
>
>         return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
> @@ -5916,6 +5963,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>         vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
>
>         vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +       spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
>
>         vcpu->arch.mmu = &vcpu->arch.root_mmu;
>         vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> @@ -6051,11 +6099,6 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
>                 kvm_tdp_mmu_zap_invalidated_roots(kvm);
>  }
>
> -static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
> -{
> -       return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
> -}
> -
>  static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
>                         struct kvm_memory_slot *slot,
>                         struct kvm_page_track_notifier_node *node)
> @@ -6277,6 +6320,7 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
>         /* Direct SPs do not require a shadowed_info_cache. */
>         caches.page_header_cache = &kvm->arch.split_page_header_cache;
>         caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> +       caches.shadow_page_cache_lock = NULL;
>
>         /* Safe to pass NULL for vCPU since requesting a direct SP. */
>         return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
> @@ -6646,66 +6690,49 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
>  static unsigned long
>  mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  {
> -       struct kvm *kvm;
> -       int nr_to_scan = sc->nr_to_scan;
> +       struct kvm_mmu_memory_cache *cache;
> +       struct kvm *kvm, *first_kvm = NULL;
>         unsigned long freed = 0;
> +       /* spinlock for memory cache */
> +       spinlock_t *cache_lock;
> +       struct kvm_vcpu *vcpu;
> +       unsigned long i;
>
>         mutex_lock(&kvm_lock);
>
>         list_for_each_entry(kvm, &vm_list, vm_list) {
> -               int idx;
> -               LIST_HEAD(invalid_list);
> -
> -               /*
> -                * Never scan more than sc->nr_to_scan VM instances.
> -                * Will not hit this condition practically since we do not try
> -                * to shrink more than one VM and it is very unlikely to see
> -                * !n_used_mmu_pages so many times.
> -                */
> -               if (!nr_to_scan--)
> +               if (first_kvm == kvm)
>                         break;
> -               /*
> -                * n_used_mmu_pages is accessed without holding kvm->mmu_lock
> -                * here. We may skip a VM instance errorneosly, but we do not
> -                * want to shrink a VM that only started to populate its MMU
> -                * anyway.
> -                */
> -               if (!kvm->arch.n_used_mmu_pages &&
> -                   !kvm_has_zapped_obsolete_pages(kvm))
> -                       continue;
> +               if (!first_kvm)
> +                       first_kvm = kvm;
> +               list_move_tail(&kvm->vm_list, &vm_list);
>
> -               idx = srcu_read_lock(&kvm->srcu);
> -               write_lock(&kvm->mmu_lock);
> +               kvm_for_each_vcpu(i, vcpu, kvm) {
> +                       cache = &vcpu->arch.mmu_shadow_page_cache;
> +                       cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock;
> +                       if (READ_ONCE(cache->nobjs)) {
> +                               spin_lock(cache_lock);
> +                               freed += kvm_mmu_empty_memory_cache(cache);
> +                               spin_unlock(cache_lock);
> +                       }
>
> -               if (kvm_has_zapped_obsolete_pages(kvm)) {
> -                       kvm_mmu_commit_zap_page(kvm,
> -                             &kvm->arch.zapped_obsolete_pages);
> -                       goto unlock;
>                 }
>
> -               freed = kvm_mmu_zap_oldest_mmu_pages(kvm, sc->nr_to_scan);
> -
> -unlock:
> -               write_unlock(&kvm->mmu_lock);
> -               srcu_read_unlock(&kvm->srcu, idx);
> -
> -               /*
> -                * unfair on small ones
> -                * per-vm shrinkers cry out
> -                * sadness comes quickly
> -                */
> -               list_move_tail(&kvm->vm_list, &vm_list);
> -               break;
> +               if (freed >= sc->nr_to_scan)
> +                       break;
>         }
>
> +       if (freed)
> +               percpu_counter_sub(&kvm_total_unused_mmu_pages, freed);
>         mutex_unlock(&kvm_lock);
> +       percpu_counter_sync(&kvm_total_unused_mmu_pages);
>         return freed;
>  }
>
>  static unsigned long
>  mmu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
>  {
> -       return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> +       return percpu_counter_sum_positive(&kvm_total_unused_mmu_pages);
>  }
>
>  static struct shrinker mmu_shrinker = {
> @@ -6820,7 +6847,7 @@ int kvm_mmu_vendor_module_init(void)
>         if (!mmu_page_header_cache)
>                 goto out;
>
> -       if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
> +       if (percpu_counter_init(&kvm_total_unused_mmu_pages, 0, GFP_KERNEL))
>                 goto out;
>
>         ret = register_shrinker(&mmu_shrinker, "x86-mmu");
> @@ -6830,7 +6857,7 @@ int kvm_mmu_vendor_module_init(void)
>         return 0;
>
>  out_shrinker:
> -       percpu_counter_destroy(&kvm_total_used_mmu_pages);
> +       percpu_counter_destroy(&kvm_total_unused_mmu_pages);
>  out:
>         mmu_destroy_caches();
>         return ret;
> @@ -6847,7 +6874,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
>  void kvm_mmu_vendor_module_exit(void)
>  {
>         mmu_destroy_caches();
> -       percpu_counter_destroy(&kvm_total_used_mmu_pages);
> +       percpu_counter_destroy(&kvm_total_unused_mmu_pages);
>         unregister_shrinker(&mmu_shrinker);
>  }
>
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index ac00bfbf32f6..c2a342028b6a 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -325,4 +325,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>  void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>  void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>
> +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> +                                   spinlock_t *cache_lock);
>  #endif /* __KVM_X86_MMU_INTERNAL_H */
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 764f7c87286f..4974fa96deff 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -264,7 +264,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
>         struct kvm_mmu_page *sp;
>
>         sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> -       sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> +       sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
> +                                               &vcpu->arch.mmu_shadow_page_cache_lock);
>
>         return sp;
>  }
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 01aad8b74162..efd9b38ea9a2 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1362,6 +1362,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
>  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
>  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);
>  int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
> +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc);
>  void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
>  void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>  #endif
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 13e88297f999..f2d762878b97 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -438,8 +438,10 @@ int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
>         return mc->nobjs;
>  }
>
> -void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc)
>  {
> +       int freed = mc->nobjs;
> +
>         while (mc->nobjs) {
>                 if (mc->kmem_cache)
>                         kmem_cache_free(mc->kmem_cache, mc->objects[--mc->nobjs]);
> @@ -447,8 +449,13 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
>                         free_page((unsigned long)mc->objects[--mc->nobjs]);
>         }
>
> -       kvfree(mc->objects);
> +       return freed;
> +}
>
> +void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> +{
> +       kvm_mmu_empty_memory_cache(mc);
> +       kvfree(mc->objects);
>         mc->objects = NULL;
>         mc->capacity = 0;
>  }
> --
> 2.39.0.314.g84b9a713c41-goog
>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2023-01-03 18:01     ` Vipin Sharma
@ 2023-01-04  0:25       ` Vipin Sharma
  2023-01-18 17:43         ` Sean Christopherson
  0 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2023-01-04  0:25 UTC (permalink / raw)
  To: David Matlack, seanjc, pbonzini; +Cc: bgardon, kvm, linux-kernel

On Tue, Jan 3, 2023 at 10:01 AM Vipin Sharma <vipinsh@google.com> wrote:
>
> On Thu, Dec 29, 2022 at 1:55 PM David Matlack <dmatlack@google.com> wrote:
> >
> > On Wed, Dec 21, 2022 at 06:34:49PM -0800, Vipin Sharma wrote:
> > > mmu_shrink_scan() is very disruptive to VMs. It picks the first
> > > VM in the vm_list and zaps the oldest page, which is most likely an
> > > upper-level SPTE and most likely to be reused. Prior to the TDP MMU,
> > > this was even more disruptive in the nested VM case, considering L1
> > > SPTEs will be the oldest even though most of the entries are for L2
> > > SPTEs.
> > >
> > > As discussed in
> > > https://lore.kernel.org/lkml/Y45dldZnI6OIf+a5@google.com/
> > > shrinker logic has not been very useful in actually keeping VMs performant
> > > and reducing memory usage.
> > >
> > > Change mmu_shrink_scan() to free pages from the vCPU's shadow page
> > > cache.  Freeing pages from cache doesn't cause vCPU exits, therefore, a
> > > VM's performance should not be affected.
> >
> > Can you split this commit up? e.g. First drop the old shrinking logic in
> > one commit (but leave the shrinking infrastructure in place). Then a
> > commit to make the shrinker free the per-vCPU shadow page caches. And
> > then perhaps another to make the shrinker free the per-VM shadow page
> > cache used for eager splitting.
> >
>
> Sounds good, I will separate it into two parts: one for dropping the old
> logic and one for adding the per-vCPU shadow page caches. Patch 3 enables
> the shrinker to free the per-VM shadow page cache.
>
> > >
> > > This also allows to change cache capacities without worrying too much
> > > about high memory usage in cache.
> > >
> > > Tested this change by running dirty_log_perf_test while dropping cache
> > > via "echo 2 > /proc/sys/vm/drop_caches" at 1 second interval
> > > continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> > > logs from kvm_mmu_memory_cache_alloc(), which is expected.
> > >
> > > Suggested-by: Sean Christopherson <seanjc@google.com>
> > > Signed-off-by: Vipin Sharma <vipinsh@google.com>
> > > ---
> > >  arch/x86/include/asm/kvm_host.h |   5 +
> > >  arch/x86/kvm/mmu/mmu.c          | 163 +++++++++++++++++++-------------
> > >  arch/x86/kvm/mmu/mmu_internal.h |   2 +
> > >  arch/x86/kvm/mmu/tdp_mmu.c      |   3 +-
> > >  include/linux/kvm_host.h        |   1 +
> > >  virt/kvm/kvm_main.c             |  11 ++-
> > >  6 files changed, 114 insertions(+), 71 deletions(-)
> > >
> > > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > > index aa4eb8cfcd7e..89cc809e4a00 100644
> > > --- a/arch/x86/include/asm/kvm_host.h
> > > +++ b/arch/x86/include/asm/kvm_host.h
> > > @@ -786,6 +786,11 @@ struct kvm_vcpu_arch {
> > >       struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
> > >       struct kvm_mmu_memory_cache mmu_page_header_cache;
> > >
> > > +     /*
> > > +      * Protects change in size of mmu_shadow_page_cache cache.
> > > +      */
> > > +     spinlock_t mmu_shadow_page_cache_lock;
> > > +
> > >       /*
> > >        * QEMU userspace and the guest each have their own FPU state.
> > >        * In vcpu_run, we switch between the user and guest FPU contexts.
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 254bc46234e0..157417e1cb6e 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -164,7 +164,10 @@ struct kvm_shadow_walk_iterator {
> > >
> > >  static struct kmem_cache *pte_list_desc_cache;
> > >  struct kmem_cache *mmu_page_header_cache;
> > > -static struct percpu_counter kvm_total_used_mmu_pages;
> > > +/*
> > > + * Total number of unused pages in MMU shadow page cache.
> > > + */
> > > +static struct percpu_counter kvm_total_unused_mmu_pages;
> > >
> > >  static void mmu_spte_set(u64 *sptep, u64 spte);
> > >
> > > @@ -655,6 +658,22 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
> > >       }
> > >  }
> > >
> > > +static int mmu_topup_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > > +                                  spinlock_t *cache_lock)
> > > +{
> > > +     int orig_nobjs;
> > > +     int r;
> > > +
> > > +     spin_lock(cache_lock);
> > > +     orig_nobjs = cache->nobjs;
> > > +     r = kvm_mmu_topup_memory_cache(cache, PT64_ROOT_MAX_LEVEL);
> > > +     if (orig_nobjs != cache->nobjs)
> > > +             percpu_counter_add(&kvm_total_unused_mmu_pages,
> > > +                                (cache->nobjs - orig_nobjs));
> > > +     spin_unlock(cache_lock);
> > > +     return r;
> > > +}
> > > +
> > >  static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> > >  {
> > >       int r;
> > > @@ -664,8 +683,8 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> > >                                      1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
> > >       if (r)
> > >               return r;
> > > -     r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > > -                                    PT64_ROOT_MAX_LEVEL);
> > > +     r = mmu_topup_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > > +                                   &vcpu->arch.mmu_shadow_page_cache_lock);
> > >       if (r)
> > >               return r;
> > >       if (maybe_indirect) {
> > > @@ -678,10 +697,25 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> > >                                         PT64_ROOT_MAX_LEVEL);
> > >  }
> > >
> > > +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > > +                                  spinlock_t *cache_lock)
> > > +{
> > > +     int orig_nobjs;
> > > +
> > > +     spin_lock(cache_lock);
> > > +     orig_nobjs = cache->nobjs;
> > > +     kvm_mmu_free_memory_cache(cache);
> > > +     if (orig_nobjs)
> > > +             percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> > > +
> > > +     spin_unlock(cache_lock);
> > > +}
> >
> > It would be nice to avoid adding these wrapper functions.
> >
> > Once you add a mutex to protect the caches from being freed while vCPUs
> > are in the middle of a page fault you can drop the spin lock. After that
> > the only reason to have these wrappers is to update
> > kvm_total_unused_mmu_pages.
> >
> > Do we really need kvm_total_unused_mmu_pages? Why not just dynamically
> > calculate the number of of unused pages in mmu_shrink_count()? Or just
> > estimate the count, e.g. num_vcpus * KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE?
> > Or have per-VM or per-vCPU shrinkers to avoid needing to do any
> > aggregation?
> >
>
> I think we can drop this; by default we can return num_kvms *
> num_vcpus * nodes * KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE.
>
> Whenever mmu_shrink_scan() is called, if there are no pages to free,
> it will return SHRINK_STOP, which stops any subsequent calls during
> that reclaim pass.
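The estimate-plus-SHRINK_STOP scheme described above can be sketched as a
small userspace model (all names here are hypothetical, not the actual KVM
implementation): the count callback returns a cheap upper bound assuming
every cache is full, and the scan callback returns SHRINK_STOP when there
was nothing to reclaim so the core MM backs off.

```c
#include <stddef.h>

#define SHRINK_STOP (~0UL)
#define NR_OBJS_PER_MEMORY_CACHE 40	/* KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE */

/* Cheap estimate for ->count_objects(): assume every cache is topped up. */
unsigned long model_shrink_count(unsigned long nr_vms, unsigned long nr_vcpus,
				 unsigned long nr_nodes)
{
	return nr_vms * nr_vcpus * nr_nodes * NR_OBJS_PER_MEMORY_CACHE;
}

/*
 * ->scan_objects(): free whatever the caches actually hold; report
 * SHRINK_STOP if there was nothing to free.
 */
unsigned long model_shrink_scan(unsigned long *cache_nobjs, size_t nr_caches)
{
	unsigned long freed = 0;
	size_t i;

	for (i = 0; i < nr_caches; i++) {
		freed += cache_nobjs[i];
		cache_nobjs[i] = 0;
	}
	return freed ? freed : SHRINK_STOP;
}
```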
>
>
> > > +
> > >  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
> > >  {
> > >       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> > > -     kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> > > +     mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
> > > +                              &vcpu->arch.mmu_shadow_page_cache_lock);
> > >       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
> >
> > mmu_shadowed_info_cache can be freed by the shrinker as well.
> >

Yes, I can do that as well.

> > >       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
> > >  }
> > > @@ -1693,27 +1727,15 @@ static int is_empty_shadow_page(u64 *spt)
> > >  }
> > >  #endif
> > >
> > > -/*
> > > - * This value is the sum of all of the kvm instances's
> > > - * kvm->arch.n_used_mmu_pages values.  We need a global,
> > > - * aggregate version in order to make the slab shrinker
> > > - * faster
> > > - */
> > > -static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
> > > -{
> > > -     kvm->arch.n_used_mmu_pages += nr;
> > > -     percpu_counter_add(&kvm_total_used_mmu_pages, nr);
> > > -}
> > > -
> > >  static void kvm_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> > >  {
> > > -     kvm_mod_used_mmu_pages(kvm, +1);
> > > +     kvm->arch.n_used_mmu_pages++;
> > >       kvm_account_pgtable_pages((void *)sp->spt, +1);
> > >  }
> > >
> > >  static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> > >  {
> > > -     kvm_mod_used_mmu_pages(kvm, -1);
> > > +     kvm->arch.n_used_mmu_pages--;
> > >       kvm_account_pgtable_pages((void *)sp->spt, -1);
> > >  }
> > >
> > > @@ -2150,8 +2172,31 @@ struct shadow_page_caches {
> > >       struct kvm_mmu_memory_cache *page_header_cache;
> > >       struct kvm_mmu_memory_cache *shadow_page_cache;
> > >       struct kvm_mmu_memory_cache *shadowed_info_cache;
> > > +     /*
> > > +      * Protects change in size of shadow_page_cache cache.
> > > +      */
> > > +     spinlock_t *shadow_page_cache_lock;
> > >  };
> > >
> > > +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> > > +                                 spinlock_t *cache_lock)
> > > +{
> > > +     int orig_nobjs;
> > > +     void *page;
> > > +
> > > +     if (cache_lock) {
> > > +             spin_lock(cache_lock);
> > > +             orig_nobjs = shadow_page_cache->nobjs;
> > > +     }
> > > +     page = kvm_mmu_memory_cache_alloc(shadow_page_cache);
> > > +     if (cache_lock) {
> > > +             if (orig_nobjs)
> > > +                     percpu_counter_dec(&kvm_total_unused_mmu_pages);
> > > +             spin_unlock(cache_lock);
> > > +     }
> > > +     return page;
> > > +}
> > > +
> > >  static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
> > >                                                     struct shadow_page_caches *caches,
> > >                                                     gfn_t gfn,
> > > @@ -2161,7 +2206,8 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
> > >       struct kvm_mmu_page *sp;
> > >
> > >       sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
> > > -     sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
> > > +     sp->spt = kvm_mmu_sp_memory_cache_alloc(caches->shadow_page_cache,
> > > +                                             caches->shadow_page_cache_lock);
> > >       if (!role.direct)
> > >               sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
> > >
> > > @@ -2218,6 +2264,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
> > >               .page_header_cache = &vcpu->arch.mmu_page_header_cache,
> > >               .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
> > >               .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
> > > +             .shadow_page_cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock
> > >       };
> > >
> > >       return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
> > > @@ -5916,6 +5963,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
> > >       vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
> > >
> > >       vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> > > +     spin_lock_init(&vcpu->arch.mmu_shadow_page_cache_lock);
> > >
> > >       vcpu->arch.mmu = &vcpu->arch.root_mmu;
> > >       vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> > > @@ -6051,11 +6099,6 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
> > >               kvm_tdp_mmu_zap_invalidated_roots(kvm);
> > >  }
> > >
> > > -static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
> > > -{
> > > -     return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
> > > -}
> > > -
> > >  static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
> > >                       struct kvm_memory_slot *slot,
> > >                       struct kvm_page_track_notifier_node *node)
> > > @@ -6277,6 +6320,7 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
> > >       /* Direct SPs do not require a shadowed_info_cache. */
> > >       caches.page_header_cache = &kvm->arch.split_page_header_cache;
> > >       caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> > > +     caches.shadow_page_cache_lock = NULL;
> > >
> > >       /* Safe to pass NULL for vCPU since requesting a direct SP. */
> > >       return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
> > > @@ -6646,66 +6690,49 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
> > >  static unsigned long
> > >  mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > >  {
> > > -     struct kvm *kvm;
> > > -     int nr_to_scan = sc->nr_to_scan;
> > > +     struct kvm_mmu_memory_cache *cache;
> > > +     struct kvm *kvm, *first_kvm = NULL;
> > >       unsigned long freed = 0;
> > > +     /* spinlock for memory cache */
> > > +     spinlock_t *cache_lock;
> > > +     struct kvm_vcpu *vcpu;
> > > +     unsigned long i;
> > >
> > >       mutex_lock(&kvm_lock);
> > >
> > >       list_for_each_entry(kvm, &vm_list, vm_list) {
> > > -             int idx;
> > > -             LIST_HEAD(invalid_list);
> > > -
> > > -             /*
> > > -              * Never scan more than sc->nr_to_scan VM instances.
> > > -              * Will not hit this condition practically since we do not try
> > > -              * to shrink more than one VM and it is very unlikely to see
> > > -              * !n_used_mmu_pages so many times.
> > > -              */
> > > -             if (!nr_to_scan--)
> > > +             if (first_kvm == kvm)
> > >                       break;
> > > -             /*
> > > -              * n_used_mmu_pages is accessed without holding kvm->mmu_lock
> > > -              * here. We may skip a VM instance errorneosly, but we do not
> > > -              * want to shrink a VM that only started to populate its MMU
> > > -              * anyway.
> > > -              */
> > > -             if (!kvm->arch.n_used_mmu_pages &&
> > > -                 !kvm_has_zapped_obsolete_pages(kvm))
> > > -                     continue;
> > > +             if (!first_kvm)
> > > +                     first_kvm = kvm;
> > > +             list_move_tail(&kvm->vm_list, &vm_list);
> > >
> > > -             idx = srcu_read_lock(&kvm->srcu);
> > > -             write_lock(&kvm->mmu_lock);
> > > +             kvm_for_each_vcpu(i, vcpu, kvm) {
> >
> > What protects this from racing with vCPU creation/deletion?
> >

vCPU deletion:
We take kvm_lock in mmu_shrink_scan(); the same lock is taken in
kvm_destroy_vm() to remove a VM from vm_list. So, while we are
iterating vm_list we will not see any VM removal, which means no vCPU
removal either.

I didn't find any other code path for vCPU deletion except failures
during VM and vCPU setup, and a VM is only added to vm_list after
successful creation.

vCPU creation:
I think it will work.

kvm_vm_ioctl_create_vcpu() initializes the vCPU and adds it to
kvm->vcpu_array, which is an xarray managed via RCU; only after that is
online_vcpus incremented. So if kvm_for_each_vcpu(), which uses RCU to
read entries, sees the incremented online_vcpus value, it will also see
all of the vCPU initialization.

@Sean, Paolo

Is the above explanation correct, i.e. is kvm_for_each_vcpu() safe
without any lock?
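The ordering argument above can be modeled in userspace with C11
release/acquire atomics standing in for KVM's RCU-managed xarray (all
names are illustrative, not KVM's): the creation side fully initializes
the vCPU and publishes it before bumping the count with release
semantics, so a reader that observes the incremented count with acquire
semantics also observes the initialization.

```c
#include <stdatomic.h>

#define MAX_VCPUS 8

struct model_vcpu { int initialized; };

static struct model_vcpu *vcpu_array[MAX_VCPUS];
static atomic_int online_vcpus;

/* Creation side: fully initialize, publish, then bump the count (release). */
void model_create_vcpu(struct model_vcpu *v)
{
	int idx = atomic_load_explicit(&online_vcpus, memory_order_relaxed);

	v->initialized = 1;
	vcpu_array[idx] = v;
	atomic_fetch_add_explicit(&online_vcpus, 1, memory_order_release);
}

/*
 * Iteration side: an acquire load of the count guarantees every entry
 * below it is seen fully initialized. Returns the number of vCPUs
 * observed with initialized == 1.
 */
int model_for_each_vcpu(void)
{
	int n = atomic_load_explicit(&online_vcpus, memory_order_acquire);
	int seen = 0;

	for (int i = 0; i < n; i++)
		seen += vcpu_array[i]->initialized;
	return seen;
}
```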

> > > +                     cache = &vcpu->arch.mmu_shadow_page_cache;
> > > +                     cache_lock = &vcpu->arch.mmu_shadow_page_cache_lock;
> > > +                     if (READ_ONCE(cache->nobjs)) {
> > > +                             spin_lock(cache_lock);
> > > +                             freed += kvm_mmu_empty_memory_cache(cache);
> > > +                             spin_unlock(cache_lock);
> > > +                     }
> >
> > What about freeing kvm->arch.split_shadow_page_cache as well?
> >

I am doing this in patch 3.

> > >
> > > -             if (kvm_has_zapped_obsolete_pages(kvm)) {
> > > -                     kvm_mmu_commit_zap_page(kvm,
> > > -                           &kvm->arch.zapped_obsolete_pages);
> > > -                     goto unlock;
> > >               }
> > >
> > > -             freed = kvm_mmu_zap_oldest_mmu_pages(kvm, sc->nr_to_scan);
> > > -
> > > -unlock:
> > > -             write_unlock(&kvm->mmu_lock);
> > > -             srcu_read_unlock(&kvm->srcu, idx);
> > > -
> > > -             /*
> > > -              * unfair on small ones
> > > -              * per-vm shrinkers cry out
> > > -              * sadness comes quickly
> > > -              */
> > > -             list_move_tail(&kvm->vm_list, &vm_list);
> > > -             break;
> > > +             if (freed >= sc->nr_to_scan)
> > > +                     break;
> > >       }
> > >
> > > +     if (freed)
> > > +             percpu_counter_sub(&kvm_total_unused_mmu_pages, freed);
> > >       mutex_unlock(&kvm_lock);
> > > +     percpu_counter_sync(&kvm_total_unused_mmu_pages);
> > >       return freed;
> > >  }
> > >
> > >  static unsigned long
> > >  mmu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
> > >  {
> > > -     return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> > > +     return percpu_counter_sum_positive(&kvm_total_unused_mmu_pages);
> > >  }
> > >
> > >  static struct shrinker mmu_shrinker = {
> > > @@ -6820,7 +6847,7 @@ int kvm_mmu_vendor_module_init(void)
> > >       if (!mmu_page_header_cache)
> > >               goto out;
> > >
> > > -     if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
> > > +     if (percpu_counter_init(&kvm_total_unused_mmu_pages, 0, GFP_KERNEL))
> > >               goto out;
> > >
> > >       ret = register_shrinker(&mmu_shrinker, "x86-mmu");
> > > @@ -6830,7 +6857,7 @@ int kvm_mmu_vendor_module_init(void)
> > >       return 0;
> > >
> > >  out_shrinker:
> > > -     percpu_counter_destroy(&kvm_total_used_mmu_pages);
> > > +     percpu_counter_destroy(&kvm_total_unused_mmu_pages);
> > >  out:
> > >       mmu_destroy_caches();
> > >       return ret;
> > > @@ -6847,7 +6874,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
> > >  void kvm_mmu_vendor_module_exit(void)
> > >  {
> > >       mmu_destroy_caches();
> > > -     percpu_counter_destroy(&kvm_total_used_mmu_pages);
> > > +     percpu_counter_destroy(&kvm_total_unused_mmu_pages);
> > >       unregister_shrinker(&mmu_shrinker);
> > >  }
> > >
> > > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > > index ac00bfbf32f6..c2a342028b6a 100644
> > > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > > @@ -325,4 +325,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> > >  void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> > >  void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> > >
> > > +void *kvm_mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> > > +                                 spinlock_t *cache_lock);
> > >  #endif /* __KVM_X86_MMU_INTERNAL_H */
> > > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > > index 764f7c87286f..4974fa96deff 100644
> > > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > > @@ -264,7 +264,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> > >       struct kvm_mmu_page *sp;
> > >
> > >       sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> > > -     sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> > > +     sp->spt = kvm_mmu_sp_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache,
> > > +                                             &vcpu->arch.mmu_shadow_page_cache_lock);
> > >
> > >       return sp;
> > >  }
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index 01aad8b74162..efd9b38ea9a2 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -1362,6 +1362,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
> > >  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
> > >  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);
> > >  int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
> > > +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc);
> > >  void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
> > >  void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> > >  #endif
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index 13e88297f999..f2d762878b97 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -438,8 +438,10 @@ int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
> > >       return mc->nobjs;
> > >  }
> > >
> > > -void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> > > +int kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc)
> > >  {
> > > +     int freed = mc->nobjs;
> > > +
> > >       while (mc->nobjs) {
> > >               if (mc->kmem_cache)
> > >                       kmem_cache_free(mc->kmem_cache, mc->objects[--mc->nobjs]);
> > > @@ -447,8 +449,13 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> > >                       free_page((unsigned long)mc->objects[--mc->nobjs]);
> > >       }
> > >
> > > -     kvfree(mc->objects);
> > > +     return freed;
> > > +}
> > >
> > > +void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
> > > +{
> > > +     kvm_mmu_empty_memory_cache(mc);
> > > +     kvfree(mc->objects);
> > >       mc->objects = NULL;
> > >       mc->capacity = 0;
> > >  }
> > > --
> > > 2.39.0.314.g84b9a713c41-goog
> > >

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2023-01-03 19:32   ` Mingwei Zhang
@ 2023-01-04  1:00     ` Vipin Sharma
  2023-01-04  6:29       ` Mingwei Zhang
  0 siblings, 1 reply; 47+ messages in thread
From: Vipin Sharma @ 2023-01-04  1:00 UTC (permalink / raw)
  To: Mingwei Zhang; +Cc: seanjc, pbonzini, bgardon, dmatlack, kvm, linux-kernel

On Tue, Jan 3, 2023 at 11:32 AM Mingwei Zhang <mizhang@google.com> wrote:
>
> On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > +                                    spinlock_t *cache_lock)
> > +{
> > +       int orig_nobjs;
> > +
> > +       spin_lock(cache_lock);
> > +       orig_nobjs = cache->nobjs;
> > +       kvm_mmu_free_memory_cache(cache);
> > +       if (orig_nobjs)
> > +               percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> > +
> > +       spin_unlock(cache_lock);
> > +}
>
> I think the mmu_cache allocation and deallocation may cause the usage
> of GFP_ATOMIC (as observed by other reviewers as well). Adding a new
> lock would definitely sound like a plan, but I think it might affect
> the performance. Alternatively, I am wondering if we could use a
> mmu_cache_sequence similar to mmu_notifier_seq to help avoid the
> concurrency?
>

Can you explain more about the performance impact? Each vCPU will have
its own mutex, so the only contention will be with the MMU shrinker.
The shrinker will use mutex_trylock(), which will not block waiting for
the lock; it will just move on to the next vCPU. While the shrinker
holds the lock, the vCPU will be blocked in the page fault path, but I
think it should not have a huge impact considering the shrinker runs
rarely and holds the lock for a short time.

> Similar to mmu_notifier_seq, mmu_cache_sequence should be protected by
> mmu write lock. In the page fault path, each vcpu has to collect a
> snapshot of  mmu_cache_sequence before calling into
> mmu_topup_memory_caches() and check the value again when holding the
> mmu lock. If the value is different, that means the mmu_shrinker has
> removed the cache objects and because of that, the vcpu should retry.
>

Yeah, this can be one approach. I think it will come down to the
performance impact of using a mutex, which I don't think should be a
concern.
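A minimal userspace model of the per-vCPU lock scheme described above,
using pthread_mutex_trylock() in place of the kernel's mutex_trylock();
the names and data layout are illustrative, not KVM's. The shrinker
never sleeps on a vCPU's cache lock: if a vCPU holds it (e.g. mid page
fault), the shrinker simply skips that vCPU and moves on.

```c
#include <pthread.h>

#define NR_VCPUS 4

struct vcpu_cache {
	pthread_mutex_t lock;
	int nobjs;
};

static struct vcpu_cache caches[NR_VCPUS];

void model_init_caches(int nobjs)
{
	for (int i = 0; i < NR_VCPUS; i++) {
		pthread_mutex_init(&caches[i].lock, NULL);
		caches[i].nobjs = nobjs;
	}
}

/* Shrinker side: trylock each per-vCPU cache, skipping contended ones. */
unsigned long model_shrink_caches(void)
{
	unsigned long freed = 0;

	for (int i = 0; i < NR_VCPUS; i++) {
		if (pthread_mutex_trylock(&caches[i].lock))
			continue;	/* contended: skip this vCPU */
		freed += caches[i].nobjs;
		caches[i].nobjs = 0;
		pthread_mutex_unlock(&caches[i].lock);
	}
	return freed;
}
```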

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2023-01-04  1:00     ` Vipin Sharma
@ 2023-01-04  6:29       ` Mingwei Zhang
  2023-01-04  6:57         ` Mingwei Zhang
  2023-01-18 17:36         ` Sean Christopherson
  0 siblings, 2 replies; 47+ messages in thread
From: Mingwei Zhang @ 2023-01-04  6:29 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, dmatlack, kvm, linux-kernel

On Tue, Jan 3, 2023 at 5:00 PM Vipin Sharma <vipinsh@google.com> wrote:
>
> On Tue, Jan 3, 2023 at 11:32 AM Mingwei Zhang <mizhang@google.com> wrote:
> >
> > On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> > >
> > > +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > > +                                    spinlock_t *cache_lock)
> > > +{
> > > +       int orig_nobjs;
> > > +
> > > +       spin_lock(cache_lock);
> > > +       orig_nobjs = cache->nobjs;
> > > +       kvm_mmu_free_memory_cache(cache);
> > > +       if (orig_nobjs)
> > > +               percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> > > +
> > > +       spin_unlock(cache_lock);
> > > +}
> >
> > I think the mmu_cache allocation and deallocation may cause the usage
> > of GFP_ATOMIC (as observed by other reviewers as well). Adding a new
> > lock would definitely sound like a plan, but I think it might affect
> > the performance. Alternatively, I am wondering if we could use a
> > mmu_cache_sequence similar to mmu_notifier_seq to help avoid the
> > concurrency?
> >
>
> Can you explain more about the performance impact? Each vcpu will have
> its own mutex. So, only contention will be with the mmu_shrinker. This
> shrinker will use mutex_try_lock() which will not block to wait for
> the lock, it will just pass on to the next vcpu. While shrinker is
> holding the lock, vcpu will be blocked in the page fault path but I
> think it should not have a huge impact considering it will execute
> rarely and for a small time.
>
> > Similar to mmu_notifier_seq, mmu_cache_sequence should be protected by
> > mmu write lock. In the page fault path, each vcpu has to collect a
> > snapshot of  mmu_cache_sequence before calling into
> > mmu_topup_memory_caches() and check the value again when holding the
> > mmu lock. If the value is different, that means the mmu_shrinker has
> > removed the cache objects and because of that, the vcpu should retry.
> >
>
> Yeah, this can be one approach. I think it will come down to the
> performance impact of using mutex which I don't think should be a
> concern.

Hmm, I think you are right that there is no performance overhead in
adding a mutex and letting the shrinker use mutex_trylock(). The point
of using a sequence counter is to avoid the new lock, since introducing
a new lock increases the maintenance burden. So unless it is necessary,
we probably should choose the simpler solution first.

In this case, I think we do have such a choice, since a similar
mechanism is already used by the mmu_notifiers.

best
-Mingwei

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2023-01-04  6:29       ` Mingwei Zhang
@ 2023-01-04  6:57         ` Mingwei Zhang
  2023-01-18 17:36         ` Sean Christopherson
  1 sibling, 0 replies; 47+ messages in thread
From: Mingwei Zhang @ 2023-01-04  6:57 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: seanjc, pbonzini, bgardon, dmatlack, kvm, linux-kernel

On Tue, Jan 3, 2023 at 10:29 PM Mingwei Zhang <mizhang@google.com> wrote:
>
> On Tue, Jan 3, 2023 at 5:00 PM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > On Tue, Jan 3, 2023 at 11:32 AM Mingwei Zhang <mizhang@google.com> wrote:
> > >
> > > On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> > > >
> > > > +static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache,
> > > > +                                    spinlock_t *cache_lock)
> > > > +{
> > > > +       int orig_nobjs;
> > > > +
> > > > +       spin_lock(cache_lock);
> > > > +       orig_nobjs = cache->nobjs;
> > > > +       kvm_mmu_free_memory_cache(cache);
> > > > +       if (orig_nobjs)
> > > > +               percpu_counter_sub(&kvm_total_unused_mmu_pages, orig_nobjs);
> > > > +
> > > > +       spin_unlock(cache_lock);
> > > > +}
> > >
> > > I think the mmu_cache allocation and deallocation may cause the usage
> > > of GFP_ATOMIC (as observed by other reviewers as well). Adding a new
> > > lock would definitely sound like a plan, but I think it might affect
> > > the performance. Alternatively, I am wondering if we could use a
> > > mmu_cache_sequence similar to mmu_notifier_seq to help avoid the
> > > concurrency?
> > >
> >
> > Can you explain more about the performance impact? Each vcpu will have
> > its own mutex. So, only contention will be with the mmu_shrinker. This
> > shrinker will use mutex_trylock(), which will not block waiting for
> > the lock, it will just pass on to the next vcpu. While shrinker is
> > holding the lock, vcpu will be blocked in the page fault path but I
> > think it should not have a huge impact considering it will execute
> > rarely and for a small time.
> >
> > > Similar to mmu_notifier_seq, mmu_cache_sequence should be protected by
> > > mmu write lock. In the page fault path, each vcpu has to collect a
> > > snapshot of mmu_cache_sequence before calling into
> > > mmu_topup_memory_caches() and check the value again when holding the
> > > mmu lock. If the value is different, that means the mmu_shrinker has
> > > removed the cache objects and because of that, the vcpu should retry.
> > >
> >
> > Yeah, this can be one approach. I think it will come down to the
> > performance impact of using mutex which I don't think should be a
> > concern.
>
> hmm, I think you are right that there is no performance overhead by
> adding a mutex and letting the shrinker using mutex_trylock(). The
> point of using a sequence counter is to avoid the new lock, since
> introducing a new lock will increase management burden. So unless it
> is necessary, we probably should choose a simple solution first.
>
> In this case, I think we do have such a choice, since a similar
> mechanism has already been used by mmu_notifiers.
>

Let me take it back. The per-vcpu sequence number in this case has to
be protected by a VM level mmu write lock. I think this might be less
performant than using a per-vcpu mutex.


* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2022-12-22  2:34 ` [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches Vipin Sharma
                     ` (2 preceding siblings ...)
  2023-01-03 19:32   ` Mingwei Zhang
@ 2023-01-16  4:14   ` kernel test robot
  3 siblings, 0 replies; 47+ messages in thread
From: kernel test robot @ 2023-01-16  4:14 UTC (permalink / raw)
  To: Vipin Sharma
  Cc: oe-lkp, lkp, Sean Christopherson, kvm, pbonzini, bgardon,
	dmatlack, linux-kernel, Vipin Sharma

[-- Attachment #1: Type: text/plain, Size: 7588 bytes --]

Greeting,

FYI, we noticed BUG:sleeping_function_called_from_invalid_context_at_include/linux/sched/mm.h due to commit (built with gcc-11):

commit: 99e2853d906a7593e6a3f0e5bc7ecc503b6b9462 ("[Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches")
url: https://github.com/intel-lab-lkp/linux/commits/Vipin-Sharma/NUMA-aware-page-table-s-pages-allocation/20221222-104911
base: https://git.kernel.org/cgit/virt/kvm/kvm.git queue
patch link: https://lore.kernel.org/all/20221222023457.1764-2-vipinsh@google.com/
patch subject: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches

in testcase: kvm-unit-tests-qemu
version: kvm-unit-tests-x86_64-e11a0e2-1_20230106

on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory

caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):


[  159.416792][T16345] BUG: sleeping function called from invalid context at include/linux/sched/mm.h:274
[  159.426638][T16345] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 16345, name: qemu-system-x86
[  159.426641][T16345] preempt_count: 1, expected: 0
[  159.426644][T16345] CPU: 122 PID: 16345 Comm: qemu-system-x86 Not tainted 6.1.0-rc8-00451-g99e2853d906a #1
[  159.426647][T16345] Call Trace:
[  159.426649][T16345]  <TASK>
[159.426650][T16345] dump_stack_lvl (lib/dump_stack.c:107 (discriminator 1)) 
[159.445592][T16345] __might_resched.cold (kernel/sched/core.c:9909) 
[159.459683][T16345] ? __kvm_mmu_topup_memory_cache (arch/x86/kvm/../../../virt/kvm/kvm_main.c:411) kvm
[159.472465][T16345] __kmem_cache_alloc_node (include/linux/sched/mm.h:274 mm/slab.h:710 mm/slub.c:3318 mm/slub.c:3437) 
[159.479626][T16345] ? kasan_set_track (mm/kasan/common.c:52) 
[159.486869][T16345] ? __kvm_mmu_topup_memory_cache (arch/x86/kvm/../../../virt/kvm/kvm_main.c:411) kvm
[159.503129][T16345] __kmalloc_node (include/linux/kasan.h:211 mm/slab_common.c:955 mm/slab_common.c:962) 
[159.510635][T16345] __kvm_mmu_topup_memory_cache (arch/x86/kvm/../../../virt/kvm/kvm_main.c:411) kvm
[159.525074][T16345] ? _raw_write_lock_irq (kernel/locking/spinlock.c:153) 
[159.533706][T16345] ? down_read (arch/x86/include/asm/atomic64_64.h:34 include/linux/atomic/atomic-long.h:41 include/linux/atomic/atomic-instrumented.h:1280 kernel/locking/rwsem.c:176 kernel/locking/rwsem.c:181 kernel/locking/rwsem.c:249 kernel/locking/rwsem.c:1259 kernel/locking/rwsem.c:1269 kernel/locking/rwsem.c:1511) 
[159.533710][T16345] mmu_topup_memory_caches (arch/x86/kvm/mmu/mmu.c:670 arch/x86/kvm/mmu/mmu.c:686) kvm
[159.547875][T16345] kvm_mmu_load (arch/x86/kvm/mmu/mmu.c:5436) kvm
[159.556325][T16345] vcpu_enter_guest+0x1ad7/0x30f0 kvm
[159.571283][T16345] ? ttwu_queue_wakelist (kernel/sched/core.c:3844 kernel/sched/core.c:3839) 
[159.577747][T16345] ? vmx_prepare_switch_to_guest (arch/x86/kvm/vmx/vmx.c:1322) kvm_intel
[159.593219][T16345] ? kvm_check_and_inject_events (arch/x86/kvm/x86.c:10215) kvm
[159.600193][T16345] ? try_to_wake_up (include/linux/sched.h:2239 kernel/sched/core.c:4197) 
[159.600197][T16345] ? kernel_fpu_begin_mask (arch/x86/kernel/fpu/core.c:137) 
[159.616366][T16345] vcpu_run (arch/x86/kvm/x86.c:10687) kvm
[159.623697][T16345] ? fpu_swap_kvm_fpstate (arch/x86/kernel/fpu/core.c:368) 
[159.623700][T16345] kvm_arch_vcpu_ioctl_run (arch/x86/kvm/x86.c:10908) kvm
[159.640555][T16345] kvm_vcpu_ioctl (arch/x86/kvm/../../../virt/kvm/kvm_main.c:4107) kvm
[159.649090][T16345] ? vfs_fileattr_set (fs/ioctl.c:774) 
[159.649094][T16345] ? kvm_dying_cpu (arch/x86/kvm/../../../virt/kvm/kvm_main.c:4063) kvm
[159.659190][T16345] ? do_futex (kernel/futex/syscalls.c:111) 
[159.673538][T16345] ? __x64_sys_get_robust_list (kernel/futex/syscalls.c:87) 
[159.673542][T16345] ? __x64_sys_rt_sigaction (kernel/signal.c:4242) 
[159.680957][T16345] ? _raw_spin_lock_bh (kernel/locking/spinlock.c:169) 
[159.680960][T16345] ? __x64_sys_futex (kernel/futex/syscalls.c:183 kernel/futex/syscalls.c:164 kernel/futex/syscalls.c:164) 
[159.697043][T16345] ? __fget_files (arch/x86/include/asm/atomic64_64.h:22 include/linux/atomic/atomic-arch-fallback.h:2363 include/linux/atomic/atomic-arch-fallback.h:2388 include/linux/atomic/atomic-arch-fallback.h:2404 include/linux/atomic/atomic-long.h:497 include/linux/atomic/atomic-instrumented.h:1854 fs/file.c:882 fs/file.c:913) 
[159.697047][T16345] __x64_sys_ioctl (fs/ioctl.c:52 fs/ioctl.c:870 fs/ioctl.c:856 fs/ioctl.c:856) 
[159.704290][T16345] do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80) 
[159.714133][T16345] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:120) 
[  159.714136][T16345] RIP: 0033:0x7f1ca20ffcc7
[ 159.728227][T16345] Code: 00 00 00 48 8b 05 c9 91 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 99 91 0c 00 f7 d8 64 89 01 48
All code
========
   0:	00 00                	add    %al,(%rax)
   2:	00 48 8b             	add    %cl,-0x75(%rax)
   5:	05 c9 91 0c 00       	add    $0xc91c9,%eax
   a:	64 c7 00 26 00 00 00 	movl   $0x26,%fs:(%rax)
  11:	48 c7 c0 ff ff ff ff 	mov    $0xffffffffffffffff,%rax
  18:	c3                   	retq   
  19:	66 2e 0f 1f 84 00 00 	nopw   %cs:0x0(%rax,%rax,1)
  20:	00 00 00 
  23:	b8 10 00 00 00       	mov    $0x10,%eax
  28:	0f 05                	syscall 
  2a:*	48 3d 01 f0 ff ff    	cmp    $0xfffffffffffff001,%rax		<-- trapping instruction
  30:	73 01                	jae    0x33
  32:	c3                   	retq   
  33:	48 8b 0d 99 91 0c 00 	mov    0xc9199(%rip),%rcx        # 0xc91d3
  3a:	f7 d8                	neg    %eax
  3c:	64 89 01             	mov    %eax,%fs:(%rcx)
  3f:	48                   	rex.W

Code starting with the faulting instruction
===========================================
   0:	48 3d 01 f0 ff ff    	cmp    $0xfffffffffffff001,%rax
   6:	73 01                	jae    0x9
   8:	c3                   	retq   
   9:	48 8b 0d 99 91 0c 00 	mov    0xc9199(%rip),%rcx        # 0xc91a9
  10:	f7 d8                	neg    %eax
  12:	64 89 01             	mov    %eax,%fs:(%rcx)
  15:	48                   	rex.W
[  159.728230][T16345] RSP: 002b:00007f1ca11ea848 EFLAGS: 00000246
[  159.736078][T16345]  ORIG_RAX: 0000000000000010
[  159.736080][T16345] RAX: ffffffffffffffda RBX: 000000000000ae80 RCX: 00007f1ca20ffcc7
[  159.736082][T16345] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 000000000000000e
[  159.751470][T16345] RBP: 0000555803999500 R08: 0000000000000000 R09: 0000555801cd6d80
[  159.751472][T16345] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[  159.761055][T16345] R13: 0000555801cdd060 R14: 00007f1ca11eab00 R15: 0000000000802000
[  159.761058][T16345]  </TASK>
[  159.780317][T16345] x86/split lock detection: #AC: qemu-system-x86/16345 took a split_lock trap at address: 0x1e3


If you fix the issue, kindly add following tag
| Reported-by: kernel test robot <yujie.liu@intel.com>
| Link: https://lore.kernel.org/oe-lkp/202301161108.4c2174c6-yujie.liu@intel.com


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        sudo bin/lkp install job.yaml           # job file is attached in this email
        bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
        sudo bin/lkp run generated-yaml-file

        # if come across any failure that blocks the test,
        # please remove ~/.lkp and /lkp dir to run from a clean state.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

[-- Attachment #2: config-6.1.0-rc8-00451-g99e2853d906a --]
[-- Type: text/plain, Size: 165863 bytes --]

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 6.1.0-rc8 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc-11 (Debian 11.3.0-8) 11.3.0"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=110300
CONFIG_CLANG_VERSION=0
CONFIG_AS_IS_GNU=y
CONFIG_AS_VERSION=23990
CONFIG_LD_IS_BFD=y
CONFIG_LD_VERSION=23990
CONFIG_LLD_VERSION=0
CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y
CONFIG_PAHOLE_VERSION=123
CONFIG_CONSTRUCTORS=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
# CONFIG_WERROR is not set
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_HAVE_KERNEL_ZSTD=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
# CONFIG_KERNEL_ZSTD is not set
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_WATCH_QUEUE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
# CONFIG_USELIB is not set
CONFIG_AUDIT=y
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_GENERIC_IRQ_MIGRATION=y
CONFIG_GENERIC_IRQ_INJECTION=y
CONFIG_HARDIRQS_SW_RESEND=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_DOMAIN_HIERARCHY=y
CONFIG_GENERIC_MSI_IRQ=y
CONFIG_GENERIC_MSI_IRQ_DOMAIN=y
CONFIG_IRQ_MSI_IOMMU=y
CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y
CONFIG_GENERIC_IRQ_RESERVATION_MODE=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem

CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_INIT=y
CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_CONTEXT_TRACKING=y
CONFIG_CONTEXT_TRACKING_IDLE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
# CONFIG_NO_HZ_IDLE is not set
CONFIG_NO_HZ_FULL=y
CONFIG_CONTEXT_TRACKING_USER=y
# CONFIG_CONTEXT_TRACKING_USER_FORCE is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_CLOCKSOURCE_WATCHDOG_MAX_SKEW_US=100
# end of Timers subsystem

CONFIG_BPF=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y

#
# BPF subsystem
#
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
CONFIG_BPF_UNPRIV_DEFAULT_OFF=y
# CONFIG_BPF_PRELOAD is not set
# CONFIG_BPF_LSM is not set
# end of BPF subsystem

CONFIG_PREEMPT_VOLUNTARY_BUILD=y
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
# CONFIG_PREEMPT_DYNAMIC is not set
# CONFIG_SCHED_CORE is not set

#
# CPU/Task time and stats accounting
#
CONFIG_VIRT_CPU_ACCOUNTING=y
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_SCHED_AVG_IRQ=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
# CONFIG_PSI is not set
# end of CPU/Task time and stats accounting

CONFIG_CPU_ISOLATION=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_RCU_EXPERT is not set
CONFIG_SRCU=y
CONFIG_TREE_SRCU=y
CONFIG_TASKS_RCU_GENERIC=y
CONFIG_TASKS_RUDE_RCU=y
CONFIG_TASKS_TRACE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_NEED_SEGCBLIST=y
CONFIG_RCU_NOCB_CPU=y
# CONFIG_RCU_NOCB_CPU_DEFAULT_ALL is not set
# end of RCU Subsystem

CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_IKHEADERS is not set
CONFIG_LOG_BUF_SHIFT=20
CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
# CONFIG_PRINTK_INDEX is not set
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y

#
# Scheduler features
#
# CONFIG_UCLAMP_TASK is not set
# end of Scheduler features

CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
CONFIG_CC_HAS_INT128=y
CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5"
CONFIG_GCC12_NO_ARRAY_BOUNDS=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_NUMA_BALANCING=y
CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
# CONFIG_CGROUP_FAVOR_DYNMODS is not set
CONFIG_MEMCG=y
CONFIG_MEMCG_KMEM=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_RDMA=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
# CONFIG_CGROUP_BPF is not set
# CONFIG_CGROUP_MISC is not set
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_TIME_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
CONFIG_RD_ZSTD=y
# CONFIG_BOOT_CONFIG is not set
CONFIG_INITRAMFS_PRESERVE_MTIME=y
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_LD_ORPHAN_WARN=y
CONFIG_SYSCTL=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
# CONFIG_EXPERT is not set
CONFIG_UID16=y
CONFIG_MULTIUSER=y
CONFIG_SGETMASK_SYSCALL=y
CONFIG_SYSFS_SYSCALL=y
CONFIG_FHANDLE=y
CONFIG_POSIX_TIMERS=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_MEMBARRIER=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
CONFIG_KALLSYMS_BASE_RELATIVE=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y
CONFIG_GUEST_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
# end of Kernel Performance Events And Counters

CONFIG_SYSTEM_DATA_VERIFICATION=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
# end of General setup

CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_MMU=y
CONFIG_ARCH_MMAP_RND_BITS_MIN=28
CONFIG_ARCH_MMAP_RND_BITS_MAX=32
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_CSUM=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_NR_GPIO=1024
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_AUDIT_ARCH=y
CONFIG_KASAN_SHADOW_OFFSET=0xdffffc0000000000
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_DYNAMIC_PHYSICAL_MASK=y
CONFIG_PGTABLE_LEVELS=5
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

#
# Processor type and features
#
CONFIG_SMP=y
CONFIG_X86_FEATURE_NAMES=y
CONFIG_X86_X2APIC=y
CONFIG_X86_MPPARSE=y
# CONFIG_GOLDFISH is not set
# CONFIG_X86_CPU_RESCTRL is not set
CONFIG_X86_EXTENDED_PLATFORM=y
# CONFIG_X86_NUMACHIP is not set
# CONFIG_X86_VSMP is not set
CONFIG_X86_UV=y
# CONFIG_X86_GOLDFISH is not set
# CONFIG_X86_INTEL_MID is not set
CONFIG_X86_INTEL_LPSS=y
# CONFIG_X86_AMD_PLATFORM_DEVICE is not set
CONFIG_IOSF_MBI=y
# CONFIG_IOSF_MBI_DEBUG is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_SCHED_OMIT_FRAME_POINTER is not set
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_X86_HV_CALLBACK_VECTOR=y
# CONFIG_XEN is not set
CONFIG_KVM_GUEST=y
CONFIG_ARCH_CPUIDLE_HALTPOLL=y
# CONFIG_PVH is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_JAILHOUSE_GUEST is not set
# CONFIG_ACRN_GUEST is not set
CONFIG_INTEL_TDX_GUEST=y
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_IA32_FEAT_CTL=y
CONFIG_X86_VMX_FEATURE_NAMES=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_CPU_SUP_ZHAOXIN=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
# CONFIG_GART_IOMMU is not set
CONFIG_BOOT_VESA_SUPPORT=y
CONFIG_MAXSMP=y
CONFIG_NR_CPUS_RANGE_BEGIN=8192
CONFIG_NR_CPUS_RANGE_END=8192
CONFIG_NR_CPUS_DEFAULT=8192
CONFIG_NR_CPUS=8192
CONFIG_SCHED_CLUSTER=y
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_SCHED_MC_PRIO=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCELOG_LEGACY=y
CONFIG_X86_MCE_INTEL=y
# CONFIG_X86_MCE_AMD is not set
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=m

#
# Performance monitoring
#
CONFIG_PERF_EVENTS_INTEL_UNCORE=m
CONFIG_PERF_EVENTS_INTEL_RAPL=m
CONFIG_PERF_EVENTS_INTEL_CSTATE=m
# CONFIG_PERF_EVENTS_AMD_POWER is not set
# CONFIG_PERF_EVENTS_AMD_UNCORE is not set
# CONFIG_PERF_EVENTS_AMD_BRS is not set
# end of Performance monitoring

CONFIG_X86_16BIT=y
CONFIG_X86_ESPFIX64=y
CONFIG_X86_VSYSCALL_EMULATION=y
CONFIG_X86_IOPL_IOPERM=y
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
# CONFIG_MICROCODE_AMD is not set
CONFIG_MICROCODE_LATE_LOADING=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_X86_5LEVEL=y
CONFIG_X86_DIRECT_GBPAGES=y
# CONFIG_X86_CPA_STATISTICS is not set
CONFIG_X86_MEM_ENCRYPT=y
# CONFIG_AMD_MEM_ENCRYPT is not set
CONFIG_NUMA=y
# CONFIG_AMD_NUMA is not set
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NUMA_EMU=y
CONFIG_NODES_SHIFT=10
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
# CONFIG_ARCH_MEMORY_PROBE is not set
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_X86_PMEM_LEGACY_DEVICE=y
CONFIG_X86_PMEM_LEGACY=m
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
# CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK is not set
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_X86_UMIP=y
CONFIG_CC_HAS_IBT=y
# CONFIG_X86_KERNEL_IBT is not set
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
# CONFIG_X86_INTEL_TSX_MODE_OFF is not set
# CONFIG_X86_INTEL_TSX_MODE_ON is not set
CONFIG_X86_INTEL_TSX_MODE_AUTO=y
# CONFIG_X86_SGX is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_MIXED=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_ARCH_HAS_KEXEC_PURGATORY=y
# CONFIG_KEXEC_SIG is not set
CONFIG_CRASH_DUMP=y
CONFIG_KEXEC_JUMP=y
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
# CONFIG_RANDOMIZE_BASE is not set
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_DYNAMIC_MEMORY_LAYOUT=y
CONFIG_HOTPLUG_CPU=y
CONFIG_BOOTPARAM_HOTPLUG_CPU0=y
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
CONFIG_LEGACY_VSYSCALL_XONLY=y
# CONFIG_LEGACY_VSYSCALL_NONE is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
# CONFIG_STRICT_SIGALTSTACK_SIZE is not set
CONFIG_HAVE_LIVEPATCH=y
CONFIG_LIVEPATCH=y
# end of Processor type and features

CONFIG_CC_HAS_SLS=y
CONFIG_CC_HAS_RETURN_THUNK=y
CONFIG_SPECULATION_MITIGATIONS=y
CONFIG_PAGE_TABLE_ISOLATION=y
# CONFIG_RETPOLINE is not set
CONFIG_CPU_IBPB_ENTRY=y
CONFIG_CPU_IBRS_ENTRY=y
# CONFIG_SLS is not set
CONFIG_ARCH_HAS_ADD_PAGES=y
CONFIG_ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_HIBERNATION_SNAPSHOT_DEV=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_USERSPACE_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_PM_TRACE_RTC is not set
CONFIG_PM_CLK=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
# CONFIG_ENERGY_MODEL is not set
CONFIG_ARCH_SUPPORTS_ACPI=y
CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
# CONFIG_ACPI_DEBUGGER is not set
CONFIG_ACPI_SPCR_TABLE=y
# CONFIG_ACPI_FPDT is not set
CONFIG_ACPI_LPIT=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
CONFIG_ACPI_EC_DEBUGFS=m
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_TAD=m
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_CPU_FREQ_PSS=y
CONFIG_ACPI_PROCESSOR_CSTATE=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_ACPI_CPPC_LIB=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_IPMI=m
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_PLATFORM_PROFILE=m
CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
CONFIG_ACPI_TABLE_UPGRADE=y
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_PCI_SLOT=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
CONFIG_ACPI_HOTPLUG_IOAPIC=y
CONFIG_ACPI_SBS=m
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
CONFIG_ACPI_BGRT=y
CONFIG_ACPI_NFIT=m
# CONFIG_NFIT_SECURITY_DEBUG is not set
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_HMAT is not set
CONFIG_HAVE_ACPI_APEI=y
CONFIG_HAVE_ACPI_APEI_NMI=y
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_MEMORY_FAILURE=y
CONFIG_ACPI_APEI_EINJ=m
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_ACPI_DPTF is not set
CONFIG_ACPI_WATCHDOG=y
CONFIG_ACPI_EXTLOG=m
CONFIG_ACPI_ADXL=y
# CONFIG_ACPI_CONFIGFS is not set
# CONFIG_ACPI_PFRUT is not set
CONFIG_ACPI_PCC=y
CONFIG_PMIC_OPREGION=y
CONFIG_ACPI_PRMT=y
CONFIG_X86_PM_TIMER=y

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y

#
# CPU frequency scaling drivers
#
CONFIG_X86_INTEL_PSTATE=y
# CONFIG_X86_PCC_CPUFREQ is not set
# CONFIG_X86_AMD_PSTATE is not set
# CONFIG_X86_AMD_PSTATE_UT is not set
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
# CONFIG_X86_POWERNOW_K8 is not set
# CONFIG_X86_AMD_FREQ_SENSITIVITY is not set
# CONFIG_X86_SPEEDSTEP_CENTRINO is not set
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m
# end of CPU Frequency scaling

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_CPU_IDLE_GOV_TEO is not set
# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
CONFIG_HALTPOLL_CPUIDLE=y
# end of CPU Idle

CONFIG_INTEL_IDLE=y
# end of Power management and ACPI options

#
# Bus options (PCI etc.)
#
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_MMCONF_FAM10H=y
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# end of Bus options (PCI etc.)

#
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
# CONFIG_X86_X32_ABI is not set
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
# end of Binary Emulations

CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_PFNCACHE=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_DIRTY_RING=y
CONFIG_HAVE_KVM_DIRTY_RING_TSO=y
CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_KVM_XFER_TO_GUEST_WORK=y
CONFIG_HAVE_KVM_PM_NOTIFIER=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
# CONFIG_KVM_AMD is not set
CONFIG_KVM_SMM=y
# CONFIG_KVM_XEN is not set
CONFIG_AS_AVX512=y
CONFIG_AS_SHA1_NI=y
CONFIG_AS_SHA256_NI=y
CONFIG_AS_TPAUSE=y

#
# General architecture-dependent options
#
CONFIG_CRASH_CORE=y
CONFIG_KEXEC_CORE=y
CONFIG_HOTPLUG_SMT=y
CONFIG_GENERIC_ENTRY=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
# CONFIG_STATIC_CALL_SELFTEST is not set
CONFIG_OPTPROBES=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_UPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_KRETPROBE_ON_RETHOOK=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
CONFIG_ARCH_WANTS_NO_INSTR=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_RUST=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_MMU_GATHER_MERGE_VMAS=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
# CONFIG_SECCOMP_CACHE_DEBUG is not set
CONFIG_HAVE_ARCH_STACKLEAK=y
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG_THIN=y
CONFIG_LTO_NONE=y
CONFIG_ARCH_SUPPORTS_CFI_CLANG=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING_USER=y
CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOVE_PUD=y
CONFIG_HAVE_MOVE_PMD=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_HAVE_ARCH_HUGE_VMALLOC=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y
CONFIG_SOFTIRQ_ON_OWN_STACK=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_HAVE_EXIT_THREAD=y
CONFIG_ARCH_MMAP_RND_BITS=28
CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
CONFIG_PAGE_SIZE_LESS_THAN_64KB=y
CONFIG_PAGE_SIZE_LESS_THAN_256KB=y
CONFIG_HAVE_OBJTOOL=y
CONFIG_HAVE_JUMP_LABEL_HACK=y
CONFIG_HAVE_NOINSTR_HACK=y
CONFIG_HAVE_NOINSTR_VALIDATION=y
CONFIG_HAVE_UACCESS_VALIDATION=y
CONFIG_HAVE_STACK_VALIDATION=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y
CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y
CONFIG_RANDOMIZE_KSTACK_OFFSET=y
# CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT is not set
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
CONFIG_ARCH_USE_MEMREMAP_PROT=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_ARCH_HAS_MEM_ENCRYPT=y
CONFIG_ARCH_HAS_CC_PLATFORM=y
CONFIG_HAVE_STATIC_CALL=y
CONFIG_HAVE_STATIC_CALL_INLINE=y
CONFIG_HAVE_PREEMPT_DYNAMIC=y
CONFIG_HAVE_PREEMPT_DYNAMIC_CALL=y
CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_ARCH_SUPPORTS_PAGE_TABLE_CHECK=y
CONFIG_ARCH_HAS_ELFCORE_COMPAT=y
CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH=y
CONFIG_DYNAMIC_SIGFRAME=y
CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling

CONFIG_HAVE_GCC_PLUGINS=y
CONFIG_GCC_PLUGINS=y
# CONFIG_GCC_PLUGIN_LATENT_ENTROPY is not set
# end of General architecture-dependent options

CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULE_SIG_FORMAT=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
# CONFIG_MODULE_UNLOAD_TAINT_TRACKING is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_MODULE_SIG=y
# CONFIG_MODULE_SIG_FORCE is not set
CONFIG_MODULE_SIG_ALL=y
# CONFIG_MODULE_SIG_SHA1 is not set
# CONFIG_MODULE_SIG_SHA224 is not set
CONFIG_MODULE_SIG_SHA256=y
# CONFIG_MODULE_SIG_SHA384 is not set
# CONFIG_MODULE_SIG_SHA512 is not set
CONFIG_MODULE_SIG_HASH="sha256"
CONFIG_MODULE_COMPRESS_NONE=y
# CONFIG_MODULE_COMPRESS_GZIP is not set
# CONFIG_MODULE_COMPRESS_XZ is not set
# CONFIG_MODULE_COMPRESS_ZSTD is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
CONFIG_MODPROBE_PATH="/sbin/modprobe"
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLOCK_LEGACY_AUTOLOAD=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_DEV_BSG_COMMON=y
CONFIG_BLK_ICQ=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=m
# CONFIG_BLK_DEV_ZONED is not set
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set
CONFIG_BLK_WBT=y
CONFIG_BLK_WBT_MQ=y
# CONFIG_BLK_CGROUP_IOLATENCY is not set
# CONFIG_BLK_CGROUP_IOCOST is not set
# CONFIG_BLK_CGROUP_IOPRIO is not set
CONFIG_BLK_DEBUG_FS=y
# CONFIG_BLK_SED_OPAL is not set
# CONFIG_BLK_INLINE_ENCRYPTION is not set

#
# Partition Types
#
# CONFIG_PARTITION_ADVANCED is not set
CONFIG_MSDOS_PARTITION=y
CONFIG_EFI_PARTITION=y
# end of Partition Types

CONFIG_BLOCK_COMPAT=y
CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_PM=y
CONFIG_BLOCK_HOLDER_DEPRECATED=y
CONFIG_BLK_MQ_STACKING=y

#
# IO Schedulers
#
CONFIG_MQ_IOSCHED_DEADLINE=y
CONFIG_MQ_IOSCHED_KYBER=y
CONFIG_IOSCHED_BFQ=y
CONFIG_BFQ_GROUP_IOSCHED=y
# CONFIG_BFQ_CGROUP_DEBUG is not set
# end of IO Schedulers

CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PADATA=y
CONFIG_ASN1=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
CONFIG_QUEUED_SPINLOCKS=y
CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
CONFIG_QUEUED_RWLOCKS=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y

#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
CONFIG_BINFMT_MISC=m
CONFIG_COREDUMP=y
# end of Executable file formats

#
# Memory Management options
#
CONFIG_ZPOOL=y
CONFIG_SWAP=y
CONFIG_ZSWAP=y
# CONFIG_ZSWAP_DEFAULT_ON is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud"
CONFIG_ZBUD=y
# CONFIG_Z3FOLD is not set
CONFIG_ZSMALLOC=y
CONFIG_ZSMALLOC_STAT=y

#
# SLAB allocator options
#
# CONFIG_SLAB is not set
CONFIG_SLUB=y
CONFIG_SLAB_MERGE_DEFAULT=y
CONFIG_SLAB_FREELIST_RANDOM=y
# CONFIG_SLAB_FREELIST_HARDENED is not set
# CONFIG_SLUB_STATS is not set
CONFIG_SLUB_CPU_PARTIAL=y
# end of SLAB allocator options

CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SPARSEMEM=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_EXCLUSIVE_SYSTEM_RAM=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_MEMORY_HOTPLUG=y
# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_MHP_MEMMAP_ON_MEMORY=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_MEMORY_BALLOON=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_COMPACT_UNEVICTABLE_DEFAULT=1
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_DEVICE_MIGRATION=y
CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
CONFIG_ARCH_ENABLE_THP_MIGRATION=y
CONFIG_CONTIG_ALLOC=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_HWPOISON_INJECT=m
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_THP_SWAP=y
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_FRONTSWAP=y
# CONFIG_CMA is not set
CONFIG_GENERIC_EARLY_IOREMAP=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_PAGE_IDLE_FLAG=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CURRENT_STACK_POINTER=y
CONFIG_ARCH_HAS_PTE_DEVMAP=y
CONFIG_ZONE_DMA=y
CONFIG_ZONE_DMA32=y
CONFIG_ZONE_DEVICE=y
CONFIG_GET_FREE_REGION=y
CONFIG_DEVICE_PRIVATE=y
CONFIG_VMAP_PFN=y
CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
CONFIG_ARCH_HAS_PKEYS=y
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_PERCPU_STATS is not set
# CONFIG_GUP_TEST is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_SECRETMEM=y
# CONFIG_ANON_VMA_NAME is not set
# CONFIG_USERFAULTFD is not set
# CONFIG_LRU_GEN is not set

#
# Data Access Monitoring
#
# CONFIG_DAMON is not set
# end of Data Access Monitoring
# end of Memory Management options

CONFIG_NET=y
CONFIG_NET_INGRESS=y
CONFIG_NET_EGRESS=y
CONFIG_SKB_EXTENSIONS=y

#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_UNIX_SCM=y
CONFIG_AF_UNIX_OOB=y
CONFIG_UNIX_DIAG=m
CONFIG_TLS=m
CONFIG_TLS_DEVICE=y
# CONFIG_TLS_TOE is not set
CONFIG_XFRM=y
CONFIG_XFRM_OFFLOAD=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_USER_COMPAT is not set
# CONFIG_XFRM_INTERFACE is not set
CONFIG_XFRM_SUB_POLICY=y
CONFIG_XFRM_MIGRATE=y
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_AH=m
CONFIG_XFRM_ESP=m
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
CONFIG_NET_KEY_MIGRATE=y
CONFIG_XDP_SOCKETS=y
# CONFIG_XDP_SOCKETS_DIAG is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_ROUTE_CLASSID=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_IP_PNP_BOOTP is not set
# CONFIG_IP_PNP_RARP is not set
CONFIG_NET_IPIP=m
CONFIG_NET_IPGRE_DEMUX=m
CONFIG_NET_IP_TUNNEL=m
CONFIG_NET_IPGRE=m
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE_COMMON=y
CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=m
CONFIG_NET_UDP_TUNNEL=m
# CONFIG_NET_FOU is not set
# CONFIG_NET_FOU_IP_TUNNELS is not set
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_ESP_OFFLOAD=m
# CONFIG_INET_ESPINTCP is not set
CONFIG_INET_IPCOMP=m
CONFIG_INET_TABLE_PERTURB_ORDER=16
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
CONFIG_INET_RAW_DIAG=m
# CONFIG_INET_DIAG_DESTROY is not set
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=m
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
CONFIG_TCP_CONG_NV=m
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_CONG_DCTCP=m
# CONFIG_TCP_CONG_CDG is not set
CONFIG_TCP_CONG_BBR=m
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_ESP_OFFLOAD=m
# CONFIG_INET6_ESPINTCP is not set
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_MIP6=m
# CONFIG_IPV6_ILA is not set
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=m
CONFIG_IPV6_VTI=m
CONFIG_IPV6_SIT=m
CONFIG_IPV6_SIT_6RD=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=m
CONFIG_IPV6_GRE=m
CONFIG_IPV6_MULTIPLE_TABLES=y
# CONFIG_IPV6_SUBTREES is not set
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
# CONFIG_IPV6_SEG6_LWTUNNEL is not set
# CONFIG_IPV6_SEG6_HMAC is not set
# CONFIG_IPV6_RPL_LWTUNNEL is not set
# CONFIG_IPV6_IOAM6_LWTUNNEL is not set
CONFIG_NETLABEL=y
# CONFIG_MPTCP is not set
CONFIG_NETWORK_SECMARK=y
CONFIG_NET_PTP_CLASSIFY=y
CONFIG_NETWORK_PHY_TIMESTAMPING=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=m

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_INGRESS=y
CONFIG_NETFILTER_EGRESS=y
CONFIG_NETFILTER_SKIP_EGRESS=y
CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_FAMILY_BRIDGE=y
CONFIG_NETFILTER_FAMILY_ARP=y
# CONFIG_NETFILTER_NETLINK_HOOK is not set
# CONFIG_NETFILTER_NETLINK_ACCT is not set
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
CONFIG_NETFILTER_NETLINK_OSF=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_LOG_SYSLOG=m
CONFIG_NETFILTER_CONNCOUNT=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_GRE=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_CONNTRACK_H323=m
CONFIG_NF_CONNTRACK_IRC=m
CONFIG_NF_CONNTRACK_BROADCAST=m
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
CONFIG_NF_CONNTRACK_SNMP=m
CONFIG_NF_CONNTRACK_PPTP=m
CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CT_NETLINK_TIMEOUT=m
CONFIG_NF_CT_NETLINK_HELPER=m
CONFIG_NETFILTER_NETLINK_GLUE_CT=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_AMANDA=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NF_NAT_REDIRECT=y
CONFIG_NF_NAT_MASQUERADE=y
CONFIG_NETFILTER_SYNPROXY=m
CONFIG_NF_TABLES=m
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=m
CONFIG_NFT_CT=m
CONFIG_NFT_CONNLIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
CONFIG_NFT_MASQ=m
CONFIG_NFT_REDIR=m
CONFIG_NFT_NAT=m
# CONFIG_NFT_TUNNEL is not set
CONFIG_NFT_OBJREF=m
CONFIG_NFT_QUEUE=m
CONFIG_NFT_QUOTA=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_REJECT_INET=m
CONFIG_NFT_COMPAT=m
CONFIG_NFT_HASH=m
CONFIG_NFT_FIB=m
CONFIG_NFT_FIB_INET=m
# CONFIG_NFT_XFRM is not set
CONFIG_NFT_SOCKET=m
# CONFIG_NFT_OSF is not set
# CONFIG_NFT_TPROXY is not set
# CONFIG_NFT_SYNPROXY is not set
CONFIG_NF_DUP_NETDEV=m
CONFIG_NFT_DUP_NETDEV=m
CONFIG_NFT_FWD_NETDEV=m
CONFIG_NFT_FIB_NETDEV=m
# CONFIG_NFT_REJECT_NETDEV is not set
# CONFIG_NF_FLOW_TABLE is not set
CONFIG_NETFILTER_XTABLES=y
CONFIG_NETFILTER_XTABLES_COMPAT=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_SET=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_AUDIT=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
# CONFIG_NETFILTER_XT_TARGET_LED is not set
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_NAT=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_BPF=m
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ECN=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
# CONFIG_NETFILTER_XT_MATCH_TIME is not set
# CONFIG_NETFILTER_XT_MATCH_U32 is not set
# end of Core Netfilter Configuration

CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPMARK=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_IPMAC=m
CONFIG_IP_SET_HASH_MAC=m
CONFIG_IP_SET_HASH_NETPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETNET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12

#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
CONFIG_IP_VS_PROTO_SCTP=y

#
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
CONFIG_IP_VS_FO=m
CONFIG_IP_VS_OVF=m
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
# CONFIG_IP_VS_MH is not set
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m
# CONFIG_IP_VS_TWOS is not set

#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8

#
# IPVS MH scheduler
#
CONFIG_IP_VS_MH_TAB_INDEX=12

#
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=m

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_TPROXY_IPV4=m
CONFIG_NF_TABLES_IPV4=y
CONFIG_NFT_REJECT_IPV4=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_DUP_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_NF_REJECT_IPV4=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_SYNPROXY=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_MANGLE=m
# CONFIG_IP_NF_TARGET_CLUSTERIP is not set
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
# end of IP: Netfilter Configuration

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_SOCKET_IPV6=m
CONFIG_NF_TPROXY_IPV6=m
CONFIG_NF_TABLES_IPV6=y
CONFIG_NFT_REJECT_IPV6=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_DUP_IPV6=m
CONFIG_NF_REJECT_IPV6=m
CONFIG_NF_LOG_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
CONFIG_IP6_NF_MATCH_FRAG=m
CONFIG_IP6_NF_MATCH_OPTS=m
CONFIG_IP6_NF_MATCH_HL=m
CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
# CONFIG_IP6_NF_MATCH_SRH is not set
# CONFIG_IP6_NF_TARGET_HL is not set
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_TARGET_SYNPROXY=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_IP6_NF_NAT=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_IP6_NF_TARGET_NPT=m
# end of IPv6: Netfilter Configuration

CONFIG_NF_DEFRAG_IPV6=m
CONFIG_NF_TABLES_BRIDGE=m
# CONFIG_NFT_BRIDGE_META is not set
CONFIG_NFT_BRIDGE_REJECT=m
# CONFIG_NF_CONNTRACK_BRIDGE is not set
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
# CONFIG_BPFILTER is not set
# CONFIG_IP_DCCP is not set
CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_OBJCNT is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_INET_SCTP_DIAG=m
# CONFIG_RDS is not set
CONFIG_TIPC=m
CONFIG_TIPC_MEDIA_UDP=y
CONFIG_TIPC_CRYPTO=y
CONFIG_TIPC_DIAG=m
CONFIG_ATM=m
CONFIG_ATM_CLIP=m
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=m
# CONFIG_ATM_MPOA is not set
CONFIG_ATM_BR2684=m
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=m
CONFIG_L2TP_DEBUGFS=m
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=m
CONFIG_L2TP_ETH=m
CONFIG_STP=m
CONFIG_GARP=m
CONFIG_MRP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
# CONFIG_BRIDGE_MRP is not set
# CONFIG_BRIDGE_CFM is not set
# CONFIG_NET_DSA is not set
CONFIG_VLAN_8021Q=m
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
CONFIG_6LOWPAN=m
# CONFIG_6LOWPAN_DEBUGFS is not set
# CONFIG_6LOWPAN_NHC is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_ATM=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
# CONFIG_NET_SCH_CBS is not set
# CONFIG_NET_SCH_ETF is not set
# CONFIG_NET_SCH_TAPRIO is not set
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_NET_SCH_NETEM=m
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
# CONFIG_NET_SCH_SKBPRIO is not set
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=y
# CONFIG_NET_SCH_CAKE is not set
CONFIG_NET_SCH_FQ=m
CONFIG_NET_SCH_HHF=m
CONFIG_NET_SCH_PIE=m
# CONFIG_NET_SCH_FQ_PIE is not set
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m
# CONFIG_NET_SCH_ETS is not set
CONFIG_NET_SCH_DEFAULT=y
# CONFIG_DEFAULT_FQ is not set
# CONFIG_DEFAULT_CODEL is not set
CONFIG_DEFAULT_FQ_CODEL=y
# CONFIG_DEFAULT_SFQ is not set
# CONFIG_DEFAULT_PFIFO_FAST is not set
CONFIG_DEFAULT_NET_SCH="fq_codel"

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=m
CONFIG_NET_CLS_RSVP6=m
CONFIG_NET_CLS_FLOW=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_FLOWER=m
CONFIG_NET_CLS_MATCHALL=m
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
CONFIG_NET_EMATCH_TEXT=m
# CONFIG_NET_EMATCH_CANID is not set
CONFIG_NET_EMATCH_IPSET=m
# CONFIG_NET_EMATCH_IPT is not set
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=m
CONFIG_NET_ACT_GACT=m
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_SAMPLE=m
# CONFIG_NET_ACT_IPT is not set
CONFIG_NET_ACT_NAT=m
CONFIG_NET_ACT_PEDIT=m
CONFIG_NET_ACT_SIMP=m
CONFIG_NET_ACT_SKBEDIT=m
CONFIG_NET_ACT_CSUM=m
# CONFIG_NET_ACT_MPLS is not set
CONFIG_NET_ACT_VLAN=m
CONFIG_NET_ACT_BPF=m
# CONFIG_NET_ACT_CONNMARK is not set
# CONFIG_NET_ACT_CTINFO is not set
CONFIG_NET_ACT_SKBMOD=m
# CONFIG_NET_ACT_IFE is not set
CONFIG_NET_ACT_TUNNEL_KEY=m
# CONFIG_NET_ACT_GATE is not set
# CONFIG_NET_TC_SKB_EXT is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
CONFIG_OPENVSWITCH=m
CONFIG_OPENVSWITCH_GRE=m
CONFIG_VSOCKETS=m
CONFIG_VSOCKETS_DIAG=m
CONFIG_VSOCKETS_LOOPBACK=m
CONFIG_VMWARE_VMCI_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_NETLINK_DIAG=m
CONFIG_MPLS=y
CONFIG_NET_MPLS_GSO=y
CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=y
# CONFIG_HSR is not set
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_L3_MASTER_DEV=y
# CONFIG_QRTR is not set
# CONFIG_NET_NCSI is not set
CONFIG_PCPU_DEV_REFCNT=y
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_SOCK_RX_QUEUE_MAPPING=y
CONFIG_XPS=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_NET_DROP_MONITOR=y
# end of Network testing
# end of Networking options

# CONFIG_HAMRADIO is not set
CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m
# CONFIG_CAN_J1939 is not set
# CONFIG_CAN_ISOTP is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
# CONFIG_AF_KCM is not set
CONFIG_STREAM_PARSER=y
# CONFIG_MCTP is not set
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
# CONFIG_CFG80211_DEBUGFS is not set
CONFIG_CFG80211_CRDA_SUPPORT=y
# CONFIG_CFG80211_WEXT is not set
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
# CONFIG_MAC80211_MESH is not set
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
CONFIG_RFKILL=m
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
# CONFIG_RFKILL_GPIO is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_FD=y
CONFIG_NET_9P_VIRTIO=y
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
# CONFIG_NFC is not set
CONFIG_PSAMPLE=m
# CONFIG_NET_IFE is not set
CONFIG_LWTUNNEL=y
CONFIG_LWTUNNEL_BPF=y
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_SOCK_VALIDATE_XMIT=y
CONFIG_NET_SELFTESTS=y
CONFIG_NET_SOCK_MSG=y
CONFIG_PAGE_POOL=y
# CONFIG_PAGE_POOL_STATS is not set
CONFIG_FAILOVER=m
CONFIG_ETHTOOL_NETLINK=y

#
# Device Drivers
#
CONFIG_HAVE_EISA=y
# CONFIG_EISA is not set
CONFIG_HAVE_PCI=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIEAER_INJECT=m
CONFIG_PCIE_ECRC=y
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
CONFIG_PCIE_DPC=y
# CONFIG_PCIE_PTM is not set
# CONFIG_PCIE_EDR is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_IRQ_DOMAIN=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
CONFIG_PCI_STUB=y
CONFIG_PCI_PF_STUB=m
CONFIG_PCI_ATS=y
CONFIG_PCI_LOCKLESS_CONFIG=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_P2PDMA is not set
CONFIG_PCI_LABEL=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=64
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_HOTPLUG_PCI_ACPI_IBM=m
# CONFIG_HOTPLUG_PCI_CPCI is not set
CONFIG_HOTPLUG_PCI_SHPC=y

#
# PCI controller drivers
#
CONFIG_VMD=y

#
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
# CONFIG_PCI_MESON is not set
# end of DesignWare PCI Core Support

#
# Mobiveil PCIe Core Support
#
# end of Mobiveil PCIe Core Support

#
# Cadence PCIe controllers support
#
# end of Cadence PCIe controllers support
# end of PCI controller drivers

#
# PCI Endpoint
#
# CONFIG_PCI_ENDPOINT is not set
# end of PCI Endpoint

#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers

# CONFIG_CXL_BUS is not set
# CONFIG_PCCARD is not set
# CONFIG_RAPIDIO is not set

#
# Generic Driver Options
#
CONFIG_AUXILIARY_BUS=y
# CONFIG_UEVENT_HELPER is not set
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_DEVTMPFS_SAFE is not set
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y

#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_FW_LOADER_SYSFS=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_FW_LOADER_USER_HELPER_FALLBACK is not set
# CONFIG_FW_LOADER_COMPRESS is not set
CONFIG_FW_CACHE=y
# CONFIG_FW_UPLOAD is not set
# end of Firmware loader

CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=m
CONFIG_REGMAP_SPI=m
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_DMA_FENCE_TRACE is not set
# end of Generic Driver Options

#
# Bus devices
#
# CONFIG_MHI_BUS is not set
# CONFIG_MHI_BUS_EP is not set
# end of Bus devices

CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y

#
# Firmware Drivers
#

#
# ARM System Control and Management Interface Protocol
#
# end of ARM System Control and Management Interface Protocol

CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DMIID=y
CONFIG_DMI_SYSFS=y
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
# CONFIG_ISCSI_IBFT is not set
CONFIG_FW_CFG_SYSFS=y
# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
CONFIG_SYSFB=y
# CONFIG_SYSFB_SIMPLEFB is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_ESRT=y
CONFIG_EFI_VARS_PSTORE=y
CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE=y
CONFIG_EFI_RUNTIME_MAP=y
# CONFIG_EFI_FAKE_MEMMAP is not set
CONFIG_EFI_DXE_MEM_ATTRIBUTES=y
CONFIG_EFI_RUNTIME_WRAPPERS=y
CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
# CONFIG_EFI_BOOTLOADER_CONTROL is not set
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
# CONFIG_APPLE_PROPERTIES is not set
# CONFIG_RESET_ATTACK_MITIGATION is not set
# CONFIG_EFI_RCI2_TABLE is not set
# CONFIG_EFI_DISABLE_PCI_DMA is not set
CONFIG_EFI_EARLYCON=y
CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
# CONFIG_EFI_DISABLE_RUNTIME is not set
# CONFIG_EFI_COCO_SECRET is not set
# end of EFI (Extensible Firmware Interface) Support

CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y

#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers

# CONFIG_GNSS is not set
# CONFIG_MTD is not set
# CONFIG_OF is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_SERIAL=m
# CONFIG_PARPORT_PC_FIFO is not set
# CONFIG_PARPORT_PC_SUPERIO is not set
# CONFIG_PARPORT_AX88796 is not set
CONFIG_PARPORT_1284=y
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_NULL_BLK=m
# CONFIG_BLK_DEV_FD is not set
CONFIG_CDROM=m
# CONFIG_PARIDE is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
# CONFIG_ZRAM is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=0
# CONFIG_BLK_DEV_DRBD is not set
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=m
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=16384
CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8
# CONFIG_CDROM_PKTCDVD_WCACHE is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_VIRTIO_BLK=m
CONFIG_BLK_DEV_RBD=m
# CONFIG_BLK_DEV_UBLK is not set

#
# NVME Support
#
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
CONFIG_NVME_MULTIPATH=y
# CONFIG_NVME_VERBOSE_ERRORS is not set
# CONFIG_NVME_HWMON is not set
CONFIG_NVME_FABRICS=m
# CONFIG_NVME_FC is not set
# CONFIG_NVME_TCP is not set
# CONFIG_NVME_AUTH is not set
CONFIG_NVME_TARGET=m
# CONFIG_NVME_TARGET_PASSTHRU is not set
CONFIG_NVME_TARGET_LOOP=m
CONFIG_NVME_TARGET_FC=m
# CONFIG_NVME_TARGET_TCP is not set
# CONFIG_NVME_TARGET_AUTH is not set
# end of NVME Support

#
# Misc devices
#
CONFIG_SENSORS_LIS3LV02D=m
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
CONFIG_TIFM_CORE=m
CONFIG_TIFM_7XX1=m
# CONFIG_ICS932S401 is not set
CONFIG_ENCLOSURE_SERVICES=m
CONFIG_SGI_XP=m
CONFIG_HP_ILO=m
CONFIG_SGI_GRU=m
# CONFIG_SGI_GRU_DEBUG is not set
CONFIG_APDS9802ALS=m
CONFIG_ISL29003=m
CONFIG_ISL29020=m
CONFIG_SENSORS_TSL2550=m
CONFIG_SENSORS_BH1770=m
CONFIG_SENSORS_APDS990X=m
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
CONFIG_VMWARE_BALLOON=m
# CONFIG_LATTICE_ECP3_CONFIG is not set
# CONFIG_SRAM is not set
# CONFIG_DW_XDATA_PCIE is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
CONFIG_MISC_RTSX=m
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_AT25 is not set
CONFIG_EEPROM_LEGACY=m
CONFIG_EEPROM_MAX6875=m
CONFIG_EEPROM_93CX6=m
# CONFIG_EEPROM_93XX46 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support

CONFIG_CB710_CORE=m
# CONFIG_CB710_DEBUG is not set
CONFIG_CB710_DEBUG_ASSUMPTIONS=y

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# end of Texas Instruments shared transport line discipline

CONFIG_SENSORS_LIS3_I2C=m
CONFIG_ALTERA_STAPL=m
CONFIG_INTEL_MEI=m
CONFIG_INTEL_MEI_ME=m
# CONFIG_INTEL_MEI_TXE is not set
# CONFIG_INTEL_MEI_GSC is not set
# CONFIG_INTEL_MEI_HDCP is not set
# CONFIG_INTEL_MEI_PXP is not set
CONFIG_VMWARE_VMCI=m
# CONFIG_GENWQE is not set
# CONFIG_ECHO is not set
# CONFIG_BCM_VK is not set
# CONFIG_MISC_ALCOR_PCI is not set
CONFIG_MISC_RTSX_PCI=m
# CONFIG_MISC_RTSX_USB is not set
# CONFIG_HABANA_AI is not set
# CONFIG_UACCE is not set
CONFIG_PVPANIC=y
# CONFIG_PVPANIC_MMIO is not set
# CONFIG_PVPANIC_PCI is not set
# CONFIG_GP_PCI1XXXX is not set
# end of Misc devices

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=m
CONFIG_SCSI_COMMON=y
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_BLK_DEV_BSG=y
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_ENCLOSURE=m
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
CONFIG_SCSI_SRP_ATTRS=m
# end of SCSI Transports

CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_ISCSI_BOOT_SYSFS is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_SCSI_ESAS2R is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_MPI3MR is not set
# CONFIG_SCSI_SMARTPQI is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_MYRS is not set
# CONFIG_VMWARE_PVSCSI is not set
# CONFIG_LIBFC is not set
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
CONFIG_SCSI_ISCI=m
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_PPA is not set
# CONFIG_SCSI_IMM is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_FC is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_EFCT is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_WD719X is not set
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
# CONFIG_SCSI_BFA_FC is not set
# CONFIG_SCSI_VIRTIO is not set
# CONFIG_SCSI_CHELSIO_FCOE is not set
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_RDAC=y
CONFIG_SCSI_DH_HP_SW=y
CONFIG_SCSI_DH_EMC=y
CONFIG_SCSI_DH_ALUA=y
# end of SCSI device support

CONFIG_ATA=m
CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
CONFIG_SATA_MOBILE_LPM_POLICY=0
CONFIG_SATA_AHCI_PLATFORM=m
# CONFIG_AHCI_DWC is not set
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_DWC is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_SIL is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
# CONFIG_MD_MULTIPATH is not set
CONFIG_MD_FAULTY=m
CONFIG_MD_CLUSTER=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_DM_BUFIO=m
# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
# CONFIG_DM_UNSTRIPED is not set
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_SMQ=m
CONFIG_DM_WRITECACHE=m
# CONFIG_DM_EBS is not set
CONFIG_DM_ERA=m
# CONFIG_DM_CLONE is not set
CONFIG_DM_MIRROR=m
CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_RAID=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
# CONFIG_DM_MULTIPATH_HST is not set
# CONFIG_DM_MULTIPATH_IOA is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_DUST is not set
CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
# CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG is not set
# CONFIG_DM_VERITY_FEC is not set
CONFIG_DM_SWITCH=m
CONFIG_DM_LOG_WRITES=m
CONFIG_DM_INTEGRITY=m
CONFIG_DM_AUDIT=y
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_TCM_USER2=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_ISCSI_TARGET=m
# CONFIG_SBP_TARGET is not set
# CONFIG_FUSION is not set

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=m
CONFIG_FIREWIRE_OHCI=m
CONFIG_FIREWIRE_SBP2=m
CONFIG_FIREWIRE_NET=m
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support

CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_WIREGUARD is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_IPVLAN is not set
# CONFIG_VXLAN is not set
# CONFIG_GENEVE is not set
# CONFIG_BAREUDP is not set
# CONFIG_GTP is not set
# CONFIG_AMT is not set
# CONFIG_MACSEC is not set
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
# CONFIG_TUN is not set
# CONFIG_TUN_VNET_CROSS_LE is not set
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=m
# CONFIG_NLMON is not set
# CONFIG_NET_VRF is not set
# CONFIG_VSOCKMON is not set
# CONFIG_ARCNET is not set
CONFIG_ATM_DRIVERS=y
# CONFIG_ATM_DUMMY is not set
# CONFIG_ATM_TCP is not set
# CONFIG_ATM_LANAI is not set
# CONFIG_ATM_ENI is not set
# CONFIG_ATM_NICSTAR is not set
# CONFIG_ATM_IDT77252 is not set
# CONFIG_ATM_IA is not set
# CONFIG_ATM_FORE200E is not set
# CONFIG_ATM_HE is not set
# CONFIG_ATM_SOLOS is not set
CONFIG_ETHERNET=y
CONFIG_MDIO=y
# CONFIG_NET_VENDOR_3COM is not set
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_AGERE=y
# CONFIG_ET131X is not set
CONFIG_NET_VENDOR_ALACRITECH=y
# CONFIG_SLICOSS is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
# CONFIG_ALTERA_TSE is not set
CONFIG_NET_VENDOR_AMAZON=y
# CONFIG_ENA_ETHERNET is not set
# CONFIG_NET_VENDOR_AMD is not set
CONFIG_NET_VENDOR_AQUANTIA=y
# CONFIG_AQTION is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ASIX=y
# CONFIG_SPI_AX88796C is not set
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
# CONFIG_ATL1C is not set
# CONFIG_ALX is not set
# CONFIG_CX_ECAT is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
# CONFIG_BCMGENET is not set
# CONFIG_BNX2 is not set
# CONFIG_CNIC is not set
# CONFIG_TIGON3 is not set
# CONFIG_BNX2X is not set
# CONFIG_SYSTEMPORT is not set
# CONFIG_BNXT is not set
CONFIG_NET_VENDOR_CADENCE=y
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_CAVIUM=y
# CONFIG_THUNDER_NIC_PF is not set
# CONFIG_THUNDER_NIC_VF is not set
# CONFIG_THUNDER_NIC_BGX is not set
# CONFIG_THUNDER_NIC_RGX is not set
CONFIG_CAVIUM_PTP=y
# CONFIG_LIQUIDIO is not set
# CONFIG_LIQUIDIO_VF is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
CONFIG_NET_VENDOR_CORTINA=y
CONFIG_NET_VENDOR_DAVICOM=y
# CONFIG_DM9051 is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
# CONFIG_NET_TULIP is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_ENGLEDER=y
# CONFIG_TSNEP is not set
CONFIG_NET_VENDOR_EZCHIP=y
CONFIG_NET_VENDOR_FUNGIBLE=y
# CONFIG_FUN_ETH is not set
CONFIG_NET_VENDOR_GOOGLE=y
# CONFIG_GVE is not set
CONFIG_NET_VENDOR_HUAWEI=y
# CONFIG_HINIC is not set
CONFIG_NET_VENDOR_I825XX=y
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_E1000E_HWTS=y
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
# CONFIG_IGBVF is not set
# CONFIG_IXGB is not set
CONFIG_IXGBE=y
CONFIG_IXGBE_HWMON=y
# CONFIG_IXGBE_DCB is not set
# CONFIG_IXGBE_IPSEC is not set
# CONFIG_IXGBEVF is not set
CONFIG_I40E=y
# CONFIG_I40E_DCB is not set
# CONFIG_I40EVF is not set
# CONFIG_ICE is not set
# CONFIG_FM10K is not set
CONFIG_IGC=y
CONFIG_NET_VENDOR_WANGXUN=y
# CONFIG_NGBE is not set
# CONFIG_TXGBE is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_ADI=y
# CONFIG_ADIN1110 is not set
CONFIG_NET_VENDOR_LITEX=y
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
# CONFIG_SKGE is not set
# CONFIG_SKY2 is not set
# CONFIG_OCTEON_EP is not set
# CONFIG_PRESTERA is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX5_CORE is not set
# CONFIG_MLXSW_CORE is not set
# CONFIG_MLXFW is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MICROCHIP=y
# CONFIG_ENC28J60 is not set
# CONFIG_ENCX24J600 is not set
# CONFIG_LAN743X is not set
CONFIG_NET_VENDOR_MICROSEMI=y
CONFIG_NET_VENDOR_MICROSOFT=y
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NI=y
# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_NETERION=y
# CONFIG_S2IO is not set
CONFIG_NET_VENDOR_NETRONOME=y
# CONFIG_NFP is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_NE2K_PCI is not set
CONFIG_NET_VENDOR_NVIDIA=y
# CONFIG_FORCEDETH is not set
CONFIG_NET_VENDOR_OKI=y
# CONFIG_ETHOC is not set
CONFIG_NET_VENDOR_PACKET_ENGINES=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_PENSANDO=y
# CONFIG_IONIC is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_NETXEN_NIC is not set
# CONFIG_QED is not set
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
CONFIG_NET_VENDOR_QUALCOMM=y
# CONFIG_QCOM_EMAC is not set
# CONFIG_RMNET is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_ATP is not set
# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
CONFIG_R8169=y
CONFIG_NET_VENDOR_RENESAS=y
CONFIG_NET_VENDOR_ROCKER=y
# CONFIG_ROCKER is not set
CONFIG_NET_VENDOR_SAMSUNG=y
# CONFIG_SXGBE_ETH is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SILAN=y
# CONFIG_SC92031 is not set
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
CONFIG_NET_VENDOR_SOLARFLARE=y
# CONFIG_SFC is not set
# CONFIG_SFC_FALCON is not set
# CONFIG_SFC_SIENA is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_SOCIONEXT=y
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_SYNOPSYS=y
# CONFIG_DWC_XLGMAC is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
# CONFIG_TI_CPSW_PHY_SEL is not set
# CONFIG_TLAN is not set
CONFIG_NET_VENDOR_VERTEXCOM=y
# CONFIG_MSE102X is not set
CONFIG_NET_VENDOR_VIA=y
# CONFIG_VIA_RHINE is not set
# CONFIG_VIA_VELOCITY is not set
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XILINX=y
# CONFIG_XILINX_EMACLITE is not set
# CONFIG_XILINX_AXI_EMAC is not set
# CONFIG_XILINX_LL_TEMAC is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLINK=y
CONFIG_PHYLIB=y
CONFIG_SWPHY=y
# CONFIG_LED_TRIGGER_PHY is not set
CONFIG_FIXED_PHY=y
# CONFIG_SFP is not set

#
# MII PHY device drivers
#
# CONFIG_AMD_PHY is not set
# CONFIG_ADIN_PHY is not set
# CONFIG_ADIN1100_PHY is not set
# CONFIG_AQUANTIA_PHY is not set
CONFIG_AX88796B_PHY=y
# CONFIG_BROADCOM_PHY is not set
# CONFIG_BCM54140_PHY is not set
# CONFIG_BCM7XXX_PHY is not set
# CONFIG_BCM84881_PHY is not set
# CONFIG_BCM87XX_PHY is not set
# CONFIG_CICADA_PHY is not set
# CONFIG_CORTINA_PHY is not set
# CONFIG_DAVICOM_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_LXT_PHY is not set
# CONFIG_INTEL_XWAY_PHY is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MARVELL_PHY is not set
# CONFIG_MARVELL_10G_PHY is not set
# CONFIG_MARVELL_88X2222_PHY is not set
# CONFIG_MAXLINEAR_GPHY is not set
# CONFIG_MEDIATEK_GE_PHY is not set
# CONFIG_MICREL_PHY is not set
# CONFIG_MICROCHIP_PHY is not set
# CONFIG_MICROCHIP_T1_PHY is not set
# CONFIG_MICROSEMI_PHY is not set
# CONFIG_MOTORCOMM_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_NXP_C45_TJA11XX_PHY is not set
# CONFIG_NXP_TJA11XX_PHY is not set
# CONFIG_QSEMI_PHY is not set
CONFIG_REALTEK_PHY=y
# CONFIG_RENESAS_PHY is not set
# CONFIG_ROCKCHIP_PHY is not set
# CONFIG_SMSC_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_TERANETICS_PHY is not set
# CONFIG_DP83822_PHY is not set
# CONFIG_DP83TC811_PHY is not set
# CONFIG_DP83848_PHY is not set
# CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set
# CONFIG_DP83TD510_PHY is not set
# CONFIG_VITESSE_PHY is not set
# CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_MICREL_KS8995MA is not set
# CONFIG_PSE_CONTROLLER is not set
CONFIG_CAN_DEV=m
CONFIG_CAN_VCAN=m
# CONFIG_CAN_VXCAN is not set
CONFIG_CAN_NETLINK=y
CONFIG_CAN_CALC_BITTIMING=y
# CONFIG_CAN_CAN327 is not set
# CONFIG_CAN_KVASER_PCIEFD is not set
CONFIG_CAN_SLCAN=m
CONFIG_CAN_C_CAN=m
CONFIG_CAN_C_CAN_PLATFORM=m
CONFIG_CAN_C_CAN_PCI=m
CONFIG_CAN_CC770=m
# CONFIG_CAN_CC770_ISA is not set
CONFIG_CAN_CC770_PLATFORM=m
# CONFIG_CAN_CTUCANFD_PCI is not set
# CONFIG_CAN_IFI_CANFD is not set
# CONFIG_CAN_M_CAN is not set
# CONFIG_CAN_PEAK_PCIEFD is not set
CONFIG_CAN_SJA1000=m
CONFIG_CAN_EMS_PCI=m
# CONFIG_CAN_F81601 is not set
CONFIG_CAN_KVASER_PCI=m
CONFIG_CAN_PEAK_PCI=m
CONFIG_CAN_PEAK_PCIEC=y
CONFIG_CAN_PLX_PCI=m
# CONFIG_CAN_SJA1000_ISA is not set
# CONFIG_CAN_SJA1000_PLATFORM is not set
CONFIG_CAN_SOFTING=m

#
# CAN SPI interfaces
#
# CONFIG_CAN_HI311X is not set
# CONFIG_CAN_MCP251X is not set
# CONFIG_CAN_MCP251XFD is not set
# end of CAN SPI interfaces

#
# CAN USB interfaces
#
# CONFIG_CAN_8DEV_USB is not set
# CONFIG_CAN_EMS_USB is not set
# CONFIG_CAN_ESD_USB is not set
# CONFIG_CAN_ETAS_ES58X is not set
# CONFIG_CAN_GS_USB is not set
# CONFIG_CAN_KVASER_USB is not set
# CONFIG_CAN_MCBA_USB is not set
# CONFIG_CAN_PEAK_USB is not set
# CONFIG_CAN_UCAN is not set
# end of CAN USB interfaces

# CONFIG_CAN_DEBUG_DEVICES is not set
CONFIG_MDIO_DEVICE=y
CONFIG_MDIO_BUS=y
CONFIG_FWNODE_MDIO=y
CONFIG_ACPI_MDIO=y
CONFIG_MDIO_DEVRES=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_MDIO_BCM_UNIMAC is not set
# CONFIG_MDIO_MVUSB is not set
# CONFIG_MDIO_THUNDER is not set

#
# MDIO Multiplexers
#

#
# PCS device drivers
#
# end of PCS device drivers

# CONFIG_PLIP is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
CONFIG_USB_NET_DRIVERS=y
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
CONFIG_USB_RTL8152=y
# CONFIG_USB_LAN78XX is not set
CONFIG_USB_USBNET=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_AX88179_178A=y
# CONFIG_USB_NET_CDCETHER is not set
# CONFIG_USB_NET_CDC_EEM is not set
# CONFIG_USB_NET_CDC_NCM is not set
# CONFIG_USB_NET_HUAWEI_CDC_NCM is not set
# CONFIG_USB_NET_CDC_MBIM is not set
# CONFIG_USB_NET_DM9601 is not set
# CONFIG_USB_NET_SR9700 is not set
# CONFIG_USB_NET_SR9800 is not set
# CONFIG_USB_NET_SMSC75XX is not set
# CONFIG_USB_NET_SMSC95XX is not set
# CONFIG_USB_NET_GL620A is not set
# CONFIG_USB_NET_NET1080 is not set
# CONFIG_USB_NET_PLUSB is not set
# CONFIG_USB_NET_MCS7830 is not set
# CONFIG_USB_NET_RNDIS_HOST is not set
# CONFIG_USB_NET_CDC_SUBSET is not set
# CONFIG_USB_NET_ZAURUS is not set
# CONFIG_USB_NET_CX82310_ETH is not set
# CONFIG_USB_NET_KALMIA is not set
# CONFIG_USB_NET_QMI_WWAN is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_NET_INT51X1 is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_USB_SIERRA_NET is not set
# CONFIG_USB_NET_CH9200 is not set
# CONFIG_USB_NET_AQC111 is not set
CONFIG_WLAN=y
CONFIG_WLAN_VENDOR_ADMTEK=y
# CONFIG_ADM8211 is not set
CONFIG_WLAN_VENDOR_ATH=y
# CONFIG_ATH_DEBUG is not set
# CONFIG_ATH5K is not set
# CONFIG_ATH5K_PCI is not set
# CONFIG_ATH9K is not set
# CONFIG_ATH9K_HTC is not set
# CONFIG_CARL9170 is not set
# CONFIG_ATH6KL is not set
# CONFIG_AR5523 is not set
# CONFIG_WIL6210 is not set
# CONFIG_ATH10K is not set
# CONFIG_WCN36XX is not set
# CONFIG_ATH11K is not set
CONFIG_WLAN_VENDOR_ATMEL=y
# CONFIG_ATMEL is not set
# CONFIG_AT76C50X_USB is not set
CONFIG_WLAN_VENDOR_BROADCOM=y
# CONFIG_B43 is not set
# CONFIG_B43LEGACY is not set
# CONFIG_BRCMSMAC is not set
# CONFIG_BRCMFMAC is not set
CONFIG_WLAN_VENDOR_CISCO=y
# CONFIG_AIRO is not set
CONFIG_WLAN_VENDOR_INTEL=y
# CONFIG_IPW2100 is not set
# CONFIG_IPW2200 is not set
# CONFIG_IWL4965 is not set
# CONFIG_IWL3945 is not set
# CONFIG_IWLWIFI is not set
CONFIG_WLAN_VENDOR_INTERSIL=y
# CONFIG_HOSTAP is not set
# CONFIG_HERMES is not set
# CONFIG_P54_COMMON is not set
CONFIG_WLAN_VENDOR_MARVELL=y
# CONFIG_LIBERTAS is not set
# CONFIG_LIBERTAS_THINFIRM is not set
# CONFIG_MWIFIEX is not set
# CONFIG_MWL8K is not set
# CONFIG_WLAN_VENDOR_MEDIATEK is not set
CONFIG_WLAN_VENDOR_MICROCHIP=y
# CONFIG_WILC1000_SDIO is not set
# CONFIG_WILC1000_SPI is not set
CONFIG_WLAN_VENDOR_PURELIFI=y
# CONFIG_PLFXLC is not set
CONFIG_WLAN_VENDOR_RALINK=y
# CONFIG_RT2X00 is not set
CONFIG_WLAN_VENDOR_REALTEK=y
# CONFIG_RTL8180 is not set
# CONFIG_RTL8187 is not set
CONFIG_RTL_CARDS=m
# CONFIG_RTL8192CE is not set
# CONFIG_RTL8192SE is not set
# CONFIG_RTL8192DE is not set
# CONFIG_RTL8723AE is not set
# CONFIG_RTL8723BE is not set
# CONFIG_RTL8188EE is not set
# CONFIG_RTL8192EE is not set
# CONFIG_RTL8821AE is not set
# CONFIG_RTL8192CU is not set
# CONFIG_RTL8XXXU is not set
# CONFIG_RTW88 is not set
# CONFIG_RTW89 is not set
CONFIG_WLAN_VENDOR_RSI=y
# CONFIG_RSI_91X is not set
CONFIG_WLAN_VENDOR_SILABS=y
# CONFIG_WFX is not set
CONFIG_WLAN_VENDOR_ST=y
# CONFIG_CW1200 is not set
CONFIG_WLAN_VENDOR_TI=y
# CONFIG_WL1251 is not set
# CONFIG_WL12XX is not set
# CONFIG_WL18XX is not set
# CONFIG_WLCORE is not set
CONFIG_WLAN_VENDOR_ZYDAS=y
# CONFIG_USB_ZD1201 is not set
# CONFIG_ZD1211RW is not set
CONFIG_WLAN_VENDOR_QUANTENNA=y
# CONFIG_QTNFMAC_PCIE is not set
# CONFIG_MAC80211_HWSIM is not set
# CONFIG_USB_NET_RNDIS_WLAN is not set
# CONFIG_VIRT_WIFI is not set
# CONFIG_WAN is not set

#
# Wireless WAN
#
# CONFIG_WWAN is not set
# end of Wireless WAN

# CONFIG_VMXNET3 is not set
# CONFIG_FUJITSU_ES is not set
# CONFIG_NETDEVSIM is not set
CONFIG_NET_FAILOVER=m
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=y
CONFIG_INPUT_FF_MEMLESS=m
CONFIG_INPUT_SPARSEKMAP=m
# CONFIG_INPUT_MATRIXKMAP is not set
CONFIG_INPUT_VIVALDIFMAP=y

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
# CONFIG_KEYBOARD_APPLESPI is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1050 is not set
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_DLINK_DIR685 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_GPIO is not set
# CONFIG_KEYBOARD_GPIO_POLLED is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_MATRIX is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_SAMSUNG is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
# CONFIG_KEYBOARD_XTKBD is not set
# CONFIG_KEYBOARD_CYPRESS_SF is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_BYD=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
CONFIG_MOUSE_PS2_ELANTECH=y
CONFIG_MOUSE_PS2_ELANTECH_SMBUS=y
CONFIG_MOUSE_PS2_SENTELIC=y
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
CONFIG_MOUSE_PS2_FOCALTECH=y
CONFIG_MOUSE_PS2_VMMOUSE=y
CONFIG_MOUSE_PS2_SMBUS=y
CONFIG_MOUSE_SERIAL=m
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
CONFIG_MOUSE_CYAPA=m
CONFIG_MOUSE_ELAN_I2C=m
CONFIG_MOUSE_ELAN_I2C_I2C=y
CONFIG_MOUSE_ELAN_I2C_SMBUS=y
CONFIG_MOUSE_VSXXXAA=m
# CONFIG_MOUSE_GPIO is not set
CONFIG_MOUSE_SYNAPTICS_I2C=m
# CONFIG_MOUSE_SYNAPTICS_USB is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
CONFIG_RMI4_CORE=m
CONFIG_RMI4_I2C=m
CONFIG_RMI4_SPI=m
CONFIG_RMI4_SMB=m
CONFIG_RMI4_F03=y
CONFIG_RMI4_F03_SERIO=m
CONFIG_RMI4_2D_SENSOR=y
CONFIG_RMI4_F11=y
CONFIG_RMI4_F12=y
CONFIG_RMI4_F30=y
CONFIG_RMI4_F34=y
# CONFIG_RMI4_F3A is not set
CONFIG_RMI4_F55=y

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PARKBD is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
# CONFIG_SERIO_PS2MULT is not set
CONFIG_SERIO_ARC_PS2=m
# CONFIG_SERIO_GPIO_PS2 is not set
# CONFIG_USERIO is not set
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
CONFIG_LDISC_AUTOLOAD=y

#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_PNP=y
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_EXAR=y
CONFIG_SERIAL_8250_NR_UARTS=64
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_DWLIB=y
CONFIG_SERIAL_8250_DW=y
# CONFIG_SERIAL_8250_RT288X is not set
CONFIG_SERIAL_8250_LPSS=y
CONFIG_SERIAL_8250_MID=y
CONFIG_SERIAL_8250_PERICOM=y

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MAX3100 is not set
# CONFIG_SERIAL_MAX310X is not set
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_SERIAL_JSM=m
# CONFIG_SERIAL_LANTIQ is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
CONFIG_SERIAL_ARC=m
CONFIG_SERIAL_ARC_NR_PORTS=1
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_SPRD is not set
# end of Serial drivers

CONFIG_SERIAL_MCTRL_GPIO=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
CONFIG_SYNCLINK_GT=m
CONFIG_N_HDLC=m
CONFIG_N_GSM=m
CONFIG_NOZOMI=m
# CONFIG_NULL_TTY is not set
CONFIG_HVC_DRIVER=y
# CONFIG_SERIAL_DEV_BUS is not set
CONFIG_PRINTER=m
# CONFIG_LP_CONSOLE is not set
CONFIG_PPDEV=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_DMI_DECODE=y
CONFIG_IPMI_PLAT_DATA=y
CONFIG_IPMI_PANIC_EVENT=y
CONFIG_IPMI_PANIC_STRING=y
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_SSIF=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_TIMERIOMEM=m
CONFIG_HW_RANDOM_INTEL=m
# CONFIG_HW_RANDOM_AMD is not set
# CONFIG_HW_RANDOM_BA431 is not set
CONFIG_HW_RANDOM_VIA=m
CONFIG_HW_RANDOM_VIRTIO=y
# CONFIG_HW_RANDOM_XIPHERA is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
CONFIG_DEVMEM=y
CONFIG_NVRAM=y
CONFIG_DEVPORT=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
# CONFIG_HPET_MMAP_DEFAULT is not set
CONFIG_HANGCHECK_TIMER=m
CONFIG_UV_MMTIMER=m
CONFIG_TCG_TPM=y
CONFIG_HW_RANDOM_TPM=y
CONFIG_TCG_TIS_CORE=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_TIS_SPI is not set
# CONFIG_TCG_TIS_I2C is not set
# CONFIG_TCG_TIS_I2C_CR50 is not set
CONFIG_TCG_TIS_I2C_ATMEL=m
CONFIG_TCG_TIS_I2C_INFINEON=m
CONFIG_TCG_TIS_I2C_NUVOTON=m
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
CONFIG_TCG_CRB=y
# CONFIG_TCG_VTPM_PROXY is not set
CONFIG_TCG_TIS_ST33ZP24=m
CONFIG_TCG_TIS_ST33ZP24_I2C=m
# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
CONFIG_TELCLOCK=m
# CONFIG_XILLYBUS is not set
# CONFIG_XILLYUSB is not set
CONFIG_RANDOM_TRUST_CPU=y
CONFIG_RANDOM_TRUST_BOOTLOADER=y
# end of Character devices

#
# I2C support
#
CONFIG_I2C=y
CONFIG_ACPI_I2C_OPREGION=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
# CONFIG_I2C_MUX_GPIO is not set
# CONFIG_I2C_MUX_LTC4306 is not set
# CONFIG_I2C_MUX_PCA9541 is not set
# CONFIG_I2C_MUX_PCA954x is not set
# CONFIG_I2C_MUX_REG is not set
CONFIG_I2C_MUX_MLXCPLD=m
# end of Multiplexer I2C Chip support

CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_SMBUS=y
CONFIG_I2C_ALGOBIT=y
CONFIG_I2C_ALGOPCA=m

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
# CONFIG_I2C_AMD_MP2 is not set
CONFIG_I2C_I801=y
CONFIG_I2C_ISCH=m
CONFIG_I2C_ISMT=m
CONFIG_I2C_PIIX4=m
CONFIG_I2C_NFORCE2=m
CONFIG_I2C_NFORCE2_S4985=m
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
CONFIG_I2C_SIS96X=m
CONFIG_I2C_VIA=m
CONFIG_I2C_VIAPRO=m

#
# ACPI drivers
#
CONFIG_I2C_SCMI=m

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_CBUS_GPIO is not set
CONFIG_I2C_DESIGNWARE_CORE=m
# CONFIG_I2C_DESIGNWARE_SLAVE is not set
CONFIG_I2C_DESIGNWARE_PLATFORM=m
# CONFIG_I2C_DESIGNWARE_AMDPSP is not set
CONFIG_I2C_DESIGNWARE_BAYTRAIL=y
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EMEV2 is not set
# CONFIG_I2C_GPIO is not set
# CONFIG_I2C_OCORES is not set
CONFIG_I2C_PCA_PLATFORM=m
CONFIG_I2C_SIMTEC=m
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_CP2615 is not set
CONFIG_I2C_PARPORT=m
# CONFIG_I2C_PCI1XXXX is not set
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
CONFIG_I2C_MLXCPLD=m
# CONFIG_I2C_VIRTIO is not set
# end of I2C Hardware Bus support

CONFIG_I2C_STUB=m
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support

# CONFIG_I3C is not set
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y
# CONFIG_SPI_MEM is not set

#
# SPI Master Controller Drivers
#
# CONFIG_SPI_ALTERA is not set
# CONFIG_SPI_AXI_SPI_ENGINE is not set
# CONFIG_SPI_BITBANG is not set
# CONFIG_SPI_BUTTERFLY is not set
# CONFIG_SPI_CADENCE is not set
# CONFIG_SPI_DESIGNWARE is not set
# CONFIG_SPI_NXP_FLEXSPI is not set
# CONFIG_SPI_GPIO is not set
# CONFIG_SPI_LM70_LLP is not set
# CONFIG_SPI_MICROCHIP_CORE is not set
# CONFIG_SPI_MICROCHIP_CORE_QSPI is not set
# CONFIG_SPI_LANTIQ_SSC is not set
# CONFIG_SPI_OC_TINY is not set
# CONFIG_SPI_PXA2XX is not set
# CONFIG_SPI_ROCKCHIP is not set
# CONFIG_SPI_SC18IS602 is not set
# CONFIG_SPI_SIFIVE is not set
# CONFIG_SPI_MXIC is not set
# CONFIG_SPI_XCOMM is not set
# CONFIG_SPI_XILINX is not set
# CONFIG_SPI_ZYNQMP_GQSPI is not set
# CONFIG_SPI_AMD is not set

#
# SPI Multiplexer support
#
# CONFIG_SPI_MUX is not set

#
# SPI Protocol Masters
#
# CONFIG_SPI_SPIDEV is not set
# CONFIG_SPI_LOOPBACK_TEST is not set
# CONFIG_SPI_TLE62X0 is not set
# CONFIG_SPI_SLAVE is not set
CONFIG_SPI_DYNAMIC=y
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
CONFIG_PPS_CLIENT_LDISC=m
CONFIG_PPS_CLIENT_PARPORT=m
CONFIG_PPS_CLIENT_GPIO=m

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
CONFIG_PTP_1588_CLOCK_OPTIONAL=y
# CONFIG_DP83640_PHY is not set
# CONFIG_PTP_1588_CLOCK_INES is not set
CONFIG_PTP_1588_CLOCK_KVM=m
# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
# CONFIG_PTP_1588_CLOCK_IDTCM is not set
# CONFIG_PTP_1588_CLOCK_VMW is not set
# end of PTP clock support

CONFIG_PINCTRL=y
# CONFIG_DEBUG_PINCTRL is not set
# CONFIG_PINCTRL_AMD is not set
# CONFIG_PINCTRL_CY8C95X0 is not set
# CONFIG_PINCTRL_MCP23S08 is not set
# CONFIG_PINCTRL_SX150X is not set

#
# Intel pinctrl drivers
#
# CONFIG_PINCTRL_BAYTRAIL is not set
# CONFIG_PINCTRL_CHERRYVIEW is not set
# CONFIG_PINCTRL_LYNXPOINT is not set
# CONFIG_PINCTRL_ALDERLAKE is not set
# CONFIG_PINCTRL_BROXTON is not set
# CONFIG_PINCTRL_CANNONLAKE is not set
# CONFIG_PINCTRL_CEDARFORK is not set
# CONFIG_PINCTRL_DENVERTON is not set
# CONFIG_PINCTRL_ELKHARTLAKE is not set
# CONFIG_PINCTRL_EMMITSBURG is not set
# CONFIG_PINCTRL_GEMINILAKE is not set
# CONFIG_PINCTRL_ICELAKE is not set
# CONFIG_PINCTRL_JASPERLAKE is not set
# CONFIG_PINCTRL_LAKEFIELD is not set
# CONFIG_PINCTRL_LEWISBURG is not set
# CONFIG_PINCTRL_METEORLAKE is not set
# CONFIG_PINCTRL_SUNRISEPOINT is not set
# CONFIG_PINCTRL_TIGERLAKE is not set
# end of Intel pinctrl drivers

#
# Renesas pinctrl drivers
#
# end of Renesas pinctrl drivers

CONFIG_GPIOLIB=y
CONFIG_GPIOLIB_FASTPATH_LIMIT=512
CONFIG_GPIO_ACPI=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_CDEV=y
CONFIG_GPIO_CDEV_V1=y

#
# Memory mapped GPIO drivers
#
# CONFIG_GPIO_AMDPT is not set
# CONFIG_GPIO_DWAPB is not set
# CONFIG_GPIO_EXAR is not set
# CONFIG_GPIO_GENERIC_PLATFORM is not set
CONFIG_GPIO_ICH=m
# CONFIG_GPIO_MB86S7X is not set
# CONFIG_GPIO_VX855 is not set
# CONFIG_GPIO_AMD_FCH is not set
# end of Memory mapped GPIO drivers

#
# Port-mapped I/O GPIO drivers
#
# CONFIG_GPIO_F7188X is not set
# CONFIG_GPIO_IT87 is not set
# CONFIG_GPIO_SCH is not set
# CONFIG_GPIO_SCH311X is not set
# CONFIG_GPIO_WINBOND is not set
# CONFIG_GPIO_WS16C48 is not set
# end of Port-mapped I/O GPIO drivers

#
# I2C GPIO expanders
#
# CONFIG_GPIO_MAX7300 is not set
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCA9570 is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_TPIC2810 is not set
# end of I2C GPIO expanders

#
# MFD GPIO expanders
#
# end of MFD GPIO expanders

#
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
# CONFIG_GPIO_BT8XX is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
# CONFIG_GPIO_RDC321X is not set
# end of PCI GPIO expanders

#
# SPI GPIO expanders
#
# CONFIG_GPIO_MAX3191X is not set
# CONFIG_GPIO_MAX7301 is not set
# CONFIG_GPIO_MC33880 is not set
# CONFIG_GPIO_PISOSR is not set
# CONFIG_GPIO_XRA1403 is not set
# end of SPI GPIO expanders

#
# USB GPIO expanders
#
# end of USB GPIO expanders

#
# Virtual GPIO drivers
#
# CONFIG_GPIO_AGGREGATOR is not set
# CONFIG_GPIO_MOCKUP is not set
# CONFIG_GPIO_VIRTIO is not set
# CONFIG_GPIO_SIM is not set
# end of Virtual GPIO drivers

# CONFIG_W1 is not set
CONFIG_POWER_RESET=y
# CONFIG_POWER_RESET_RESTART is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_POWER_SUPPLY_HWMON=y
# CONFIG_PDA_POWER is not set
# CONFIG_IP5XXX_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SAMSUNG_SDI is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_MANAGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_GPIO is not set
# CONFIG_CHARGER_LT3651 is not set
# CONFIG_CHARGER_LTC4162L is not set
# CONFIG_CHARGER_MAX77976 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_BQ24257 is not set
# CONFIG_CHARGER_BQ24735 is not set
# CONFIG_CHARGER_BQ2515X is not set
# CONFIG_CHARGER_BQ25890 is not set
# CONFIG_CHARGER_BQ25980 is not set
# CONFIG_CHARGER_BQ256XX is not set
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_BATTERY_GOLDFISH is not set
# CONFIG_BATTERY_RT5033 is not set
# CONFIG_CHARGER_RT9455 is not set
# CONFIG_CHARGER_BD99954 is not set
# CONFIG_BATTERY_UG3105 is not set
CONFIG_HWMON=y
CONFIG_HWMON_VID=m
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
CONFIG_SENSORS_ABITUGURU=m
CONFIG_SENSORS_ABITUGURU3=m
# CONFIG_SENSORS_AD7314 is not set
CONFIG_SENSORS_AD7414=m
CONFIG_SENSORS_AD7418=m
CONFIG_SENSORS_ADM1025=m
CONFIG_SENSORS_ADM1026=m
CONFIG_SENSORS_ADM1029=m
CONFIG_SENSORS_ADM1031=m
# CONFIG_SENSORS_ADM1177 is not set
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7X10=m
# CONFIG_SENSORS_ADT7310 is not set
CONFIG_SENSORS_ADT7410=m
CONFIG_SENSORS_ADT7411=m
CONFIG_SENSORS_ADT7462=m
CONFIG_SENSORS_ADT7470=m
CONFIG_SENSORS_ADT7475=m
# CONFIG_SENSORS_AHT10 is not set
# CONFIG_SENSORS_AQUACOMPUTER_D5NEXT is not set
# CONFIG_SENSORS_AS370 is not set
CONFIG_SENSORS_ASC7621=m
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
CONFIG_SENSORS_K8TEMP=m
CONFIG_SENSORS_K10TEMP=m
CONFIG_SENSORS_FAM15H_POWER=m
CONFIG_SENSORS_APPLESMC=m
CONFIG_SENSORS_ASB100=m
CONFIG_SENSORS_ATXP1=m
# CONFIG_SENSORS_CORSAIR_CPRO is not set
# CONFIG_SENSORS_CORSAIR_PSU is not set
# CONFIG_SENSORS_DRIVETEMP is not set
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
# CONFIG_SENSORS_DELL_SMM is not set
CONFIG_SENSORS_I5K_AMB=m
CONFIG_SENSORS_F71805F=m
CONFIG_SENSORS_F71882FG=m
CONFIG_SENSORS_F75375S=m
CONFIG_SENSORS_FSCHMD=m
# CONFIG_SENSORS_FTSTEUTATES is not set
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
CONFIG_SENSORS_G760A=m
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_HIH6130 is not set
CONFIG_SENSORS_IBMAEM=m
CONFIG_SENSORS_IBMPEX=m
CONFIG_SENSORS_I5500=m
CONFIG_SENSORS_CORETEMP=m
CONFIG_SENSORS_IT87=m
CONFIG_SENSORS_JC42=m
# CONFIG_SENSORS_POWR1220 is not set
CONFIG_SENSORS_LINEAGE=m
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2947_SPI is not set
# CONFIG_SENSORS_LTC2990 is not set
# CONFIG_SENSORS_LTC2992 is not set
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LTC4215=m
# CONFIG_SENSORS_LTC4222 is not set
CONFIG_SENSORS_LTC4245=m
# CONFIG_SENSORS_LTC4260 is not set
CONFIG_SENSORS_LTC4261=m
# CONFIG_SENSORS_MAX1111 is not set
# CONFIG_SENSORS_MAX127 is not set
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
CONFIG_SENSORS_MAX1668=m
CONFIG_SENSORS_MAX197=m
# CONFIG_SENSORS_MAX31722 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX31760 is not set
# CONFIG_SENSORS_MAX6620 is not set
# CONFIG_SENSORS_MAX6621 is not set
CONFIG_SENSORS_MAX6639=m
CONFIG_SENSORS_MAX6650=m
CONFIG_SENSORS_MAX6697=m
# CONFIG_SENSORS_MAX31790 is not set
CONFIG_SENSORS_MCP3021=m
# CONFIG_SENSORS_MLXREG_FAN is not set
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_TPS23861 is not set
# CONFIG_SENSORS_MR75203 is not set
# CONFIG_SENSORS_ADCXX is not set
CONFIG_SENSORS_LM63=m
# CONFIG_SENSORS_LM70 is not set
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
CONFIG_SENSORS_LM78=m
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
CONFIG_SENSORS_LM85=m
CONFIG_SENSORS_LM87=m
CONFIG_SENSORS_LM90=m
CONFIG_SENSORS_LM92=m
CONFIG_SENSORS_LM93=m
CONFIG_SENSORS_LM95234=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_LM95245=m
CONFIG_SENSORS_PC87360=m
CONFIG_SENSORS_PC87427=m
# CONFIG_SENSORS_NCT6683 is not set
CONFIG_SENSORS_NCT6775_CORE=m
CONFIG_SENSORS_NCT6775=m
# CONFIG_SENSORS_NCT6775_I2C is not set
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NCT7904 is not set
# CONFIG_SENSORS_NPCM7XX is not set
# CONFIG_SENSORS_NZXT_KRAKEN2 is not set
# CONFIG_SENSORS_NZXT_SMART2 is not set
CONFIG_SENSORS_PCF8591=m
CONFIG_PMBUS=m
CONFIG_SENSORS_PMBUS=m
# CONFIG_SENSORS_ADM1266 is not set
CONFIG_SENSORS_ADM1275=m
# CONFIG_SENSORS_BEL_PFE is not set
# CONFIG_SENSORS_BPA_RS600 is not set
# CONFIG_SENSORS_DELTA_AHE50DC_FAN is not set
# CONFIG_SENSORS_FSP_3Y is not set
# CONFIG_SENSORS_IBM_CFFPS is not set
# CONFIG_SENSORS_DPS920AB is not set
# CONFIG_SENSORS_INSPUR_IPSPS is not set
# CONFIG_SENSORS_IR35221 is not set
# CONFIG_SENSORS_IR36021 is not set
# CONFIG_SENSORS_IR38064 is not set
# CONFIG_SENSORS_IRPS5401 is not set
# CONFIG_SENSORS_ISL68137 is not set
CONFIG_SENSORS_LM25066=m
# CONFIG_SENSORS_LT7182S is not set
CONFIG_SENSORS_LTC2978=m
# CONFIG_SENSORS_LTC3815 is not set
# CONFIG_SENSORS_MAX15301 is not set
CONFIG_SENSORS_MAX16064=m
# CONFIG_SENSORS_MAX16601 is not set
# CONFIG_SENSORS_MAX20730 is not set
# CONFIG_SENSORS_MAX20751 is not set
# CONFIG_SENSORS_MAX31785 is not set
CONFIG_SENSORS_MAX34440=m
CONFIG_SENSORS_MAX8688=m
# CONFIG_SENSORS_MP2888 is not set
# CONFIG_SENSORS_MP2975 is not set
# CONFIG_SENSORS_MP5023 is not set
# CONFIG_SENSORS_PIM4328 is not set
# CONFIG_SENSORS_PLI1209BC is not set
# CONFIG_SENSORS_PM6764TR is not set
# CONFIG_SENSORS_PXE1610 is not set
# CONFIG_SENSORS_Q54SJ108A2 is not set
# CONFIG_SENSORS_STPDDC60 is not set
# CONFIG_SENSORS_TPS40422 is not set
# CONFIG_SENSORS_TPS53679 is not set
# CONFIG_SENSORS_TPS546D24 is not set
CONFIG_SENSORS_UCD9000=m
CONFIG_SENSORS_UCD9200=m
# CONFIG_SENSORS_XDPE152 is not set
# CONFIG_SENSORS_XDPE122 is not set
CONFIG_SENSORS_ZL6100=m
# CONFIG_SENSORS_SBTSI is not set
# CONFIG_SENSORS_SBRMI is not set
CONFIG_SENSORS_SHT15=m
CONFIG_SENSORS_SHT21=m
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHT4x is not set
# CONFIG_SENSORS_SHTC1 is not set
CONFIG_SENSORS_SIS5595=m
CONFIG_SENSORS_DME1737=m
CONFIG_SENSORS_EMC1403=m
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC2305 is not set
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SMSC47B397=m
CONFIG_SENSORS_SCH56XX_COMMON=m
CONFIG_SENSORS_SCH5627=m
CONFIG_SENSORS_SCH5636=m
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_ADC128D818 is not set
CONFIG_SENSORS_ADS7828=m
# CONFIG_SENSORS_ADS7871 is not set
CONFIG_SENSORS_AMC6821=m
CONFIG_SENSORS_INA209=m
CONFIG_SENSORS_INA2XX=m
# CONFIG_SENSORS_INA238 is not set
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
CONFIG_SENSORS_THMC50=m
CONFIG_SENSORS_TMP102=m
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
CONFIG_SENSORS_TMP401=m
CONFIG_SENSORS_TMP421=m
# CONFIG_SENSORS_TMP464 is not set
# CONFIG_SENSORS_TMP513 is not set
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VIA686A=m
CONFIG_SENSORS_VT1211=m
CONFIG_SENSORS_VT8231=m
# CONFIG_SENSORS_W83773G is not set
CONFIG_SENSORS_W83781D=m
CONFIG_SENSORS_W83791D=m
CONFIG_SENSORS_W83792D=m
CONFIG_SENSORS_W83793=m
CONFIG_SENSORS_W83795=m
# CONFIG_SENSORS_W83795_FANCTRL is not set
CONFIG_SENSORS_W83L785TS=m
CONFIG_SENSORS_W83L786NG=m
CONFIG_SENSORS_W83627HF=m
CONFIG_SENSORS_W83627EHF=m
# CONFIG_SENSORS_XGENE is not set

#
# ACPI drivers
#
CONFIG_SENSORS_ACPI_POWER=m
CONFIG_SENSORS_ATK0110=m
# CONFIG_SENSORS_ASUS_WMI is not set
# CONFIG_SENSORS_ASUS_EC is not set
CONFIG_THERMAL=y
# CONFIG_THERMAL_NETLINK is not set
# CONFIG_THERMAL_STATISTICS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_WRITABLE_TRIPS=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
CONFIG_THERMAL_GOV_FAIR_SHARE=y
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_BANG_BANG=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_THERMAL_EMULATION is not set

#
# Intel thermal drivers
#
CONFIG_INTEL_POWERCLAMP=m
CONFIG_X86_THERMAL_VECTOR=y
CONFIG_X86_PKG_TEMP_THERMAL=m
# CONFIG_INTEL_SOC_DTS_THERMAL is not set

#
# ACPI INT340X thermal drivers
#
# CONFIG_INT340X_THERMAL is not set
# end of ACPI INT340X thermal drivers

CONFIG_INTEL_PCH_THERMAL=m
# CONFIG_INTEL_TCC_COOLING is not set
# CONFIG_INTEL_MENLOW is not set
# CONFIG_INTEL_HFI_THERMAL is not set
# end of Intel thermal drivers

CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
CONFIG_WATCHDOG_OPEN_TIMEOUT=0
CONFIG_WATCHDOG_SYSFS=y
# CONFIG_WATCHDOG_HRTIMER_PRETIMEOUT is not set

#
# Watchdog Pretimeout Governors
#
# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
CONFIG_WDAT_WDT=m
# CONFIG_XILINX_WATCHDOG is not set
# CONFIG_ZIIRAVE_WATCHDOG is not set
# CONFIG_MLX_WDT is not set
# CONFIG_CADENCE_WATCHDOG is not set
# CONFIG_DW_WATCHDOG is not set
# CONFIG_MAX63XX_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
CONFIG_ALIM1535_WDT=m
CONFIG_ALIM7101_WDT=m
# CONFIG_EBC_C384_WDT is not set
# CONFIG_EXAR_WDT is not set
CONFIG_F71808E_WDT=m
# CONFIG_SP5100_TCO is not set
CONFIG_SBC_FITPC2_WATCHDOG=m
# CONFIG_EUROTECH_WDT is not set
CONFIG_IB700_WDT=m
CONFIG_IBMASR=m
# CONFIG_WAFER_WDT is not set
CONFIG_I6300ESB_WDT=y
CONFIG_IE6XX_WDT=m
CONFIG_ITCO_WDT=y
CONFIG_ITCO_VENDOR_SUPPORT=y
CONFIG_IT8712F_WDT=m
CONFIG_IT87_WDT=m
CONFIG_HP_WATCHDOG=m
CONFIG_HPWDT_NMI_DECODING=y
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
CONFIG_NV_TCO=m
# CONFIG_60XX_WDT is not set
# CONFIG_CPU5_WDT is not set
CONFIG_SMSC_SCH311X_WDT=m
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_TQMX86_WDT is not set
CONFIG_VIA_WDT=m
CONFIG_W83627HF_WDT=m
CONFIG_W83877F_WDT=m
CONFIG_W83977F_WDT=m
CONFIG_MACHZ_WDT=m
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_INTEL_MEI_WDT=m
# CONFIG_NI903X_WDT is not set
# CONFIG_NIC7018_WDT is not set
# CONFIG_MEN_A21_WDT is not set

#
# PCI-based Watchdog Cards
#
CONFIG_PCIPCWATCHDOG=m
CONFIG_WDTPCI=m

#
# USB-based Watchdog Cards
#
# CONFIG_USBPCWATCHDOG is not set
CONFIG_SSB_POSSIBLE=y
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y
CONFIG_BCMA=m
CONFIG_BCMA_HOST_PCI_POSSIBLE=y
CONFIG_BCMA_HOST_PCI=y
# CONFIG_BCMA_HOST_SOC is not set
CONFIG_BCMA_DRIVER_PCI=y
CONFIG_BCMA_DRIVER_GMAC_CMN=y
CONFIG_BCMA_DRIVER_GPIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_AAT2870_CORE is not set
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_MADERA is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_SPI is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9062 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_DA9150 is not set
# CONFIG_MFD_DLN2 is not set
# CONFIG_MFD_MC13XXX_SPI is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_HTC_I2CPLD is not set
# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
CONFIG_LPC_ICH=y
CONFIG_LPC_SCH=m
CONFIG_MFD_INTEL_LPSS=y
CONFIG_MFD_INTEL_LPSS_ACPI=y
CONFIG_MFD_INTEL_LPSS_PCI=y
# CONFIG_MFD_INTEL_PMC_BXT is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX77843 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6370 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
# CONFIG_MFD_OCELOT is not set
# CONFIG_EZX_PCAP is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_SY7636A is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT4831 is not set
# CONFIG_MFD_RT5033 is not set
# CONFIG_MFD_RT5120 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SI476X_CORE is not set
CONFIG_MFD_SM501=m
CONFIG_MFD_SM501_GPIO=y
# CONFIG_MFD_SKY81452 is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS65010 is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65910 is not set
# CONFIG_MFD_TPS65912_I2C is not set
# CONFIG_MFD_TPS65912_SPI is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TQMX86 is not set
CONFIG_MFD_VX855=m
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_ARIZONA_SPI is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM831X_SPI is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_ATC260X_I2C is not set
# CONFIG_MFD_INTEL_M10_BMC is not set
# end of Multifunction device drivers

# CONFIG_REGULATOR is not set
CONFIG_RC_CORE=m
CONFIG_LIRC=y
CONFIG_RC_MAP=m
CONFIG_RC_DECODERS=y
CONFIG_IR_IMON_DECODER=m
CONFIG_IR_JVC_DECODER=m
CONFIG_IR_MCE_KBD_DECODER=m
CONFIG_IR_NEC_DECODER=m
CONFIG_IR_RC5_DECODER=m
CONFIG_IR_RC6_DECODER=m
# CONFIG_IR_RCMM_DECODER is not set
CONFIG_IR_SANYO_DECODER=m
# CONFIG_IR_SHARP_DECODER is not set
CONFIG_IR_SONY_DECODER=m
# CONFIG_IR_XMP_DECODER is not set
CONFIG_RC_DEVICES=y
CONFIG_IR_ENE=m
CONFIG_IR_FINTEK=m
# CONFIG_IR_IGORPLUGUSB is not set
# CONFIG_IR_IGUANA is not set
# CONFIG_IR_IMON is not set
# CONFIG_IR_IMON_RAW is not set
CONFIG_IR_ITE_CIR=m
# CONFIG_IR_MCEUSB is not set
CONFIG_IR_NUVOTON=m
# CONFIG_IR_REDRAT3 is not set
CONFIG_IR_SERIAL=m
CONFIG_IR_SERIAL_TRANSMITTER=y
# CONFIG_IR_STREAMZAP is not set
# CONFIG_IR_TOY is not set
# CONFIG_IR_TTUSBIR is not set
CONFIG_IR_WINBOND_CIR=m
# CONFIG_RC_ATI_REMOTE is not set
# CONFIG_RC_LOOPBACK is not set
# CONFIG_RC_XBOX_DVD is not set

#
# CEC support
#
# CONFIG_MEDIA_CEC_SUPPORT is not set
# end of CEC support

CONFIG_MEDIA_SUPPORT=m
CONFIG_MEDIA_SUPPORT_FILTER=y
CONFIG_MEDIA_SUBDRV_AUTOSELECT=y

#
# Media device types
#
# CONFIG_MEDIA_CAMERA_SUPPORT is not set
# CONFIG_MEDIA_ANALOG_TV_SUPPORT is not set
# CONFIG_MEDIA_DIGITAL_TV_SUPPORT is not set
# CONFIG_MEDIA_RADIO_SUPPORT is not set
# CONFIG_MEDIA_SDR_SUPPORT is not set
# CONFIG_MEDIA_PLATFORM_SUPPORT is not set
# CONFIG_MEDIA_TEST_SUPPORT is not set
# end of Media device types

#
# Media drivers
#

#
# Drivers filtered as selected at 'Filter media drivers'
#

#
# Media drivers
#
# CONFIG_MEDIA_USB_SUPPORT is not set
# CONFIG_MEDIA_PCI_SUPPORT is not set
# end of Media drivers

CONFIG_MEDIA_HIDE_ANCILLARY_SUBDRV=y

#
# Media ancillary drivers
#
# end of Media ancillary drivers

#
# Graphics support
#
CONFIG_APERTURE_HELPERS=y
# CONFIG_AGP is not set
CONFIG_INTEL_GTT=m
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=m
CONFIG_DRM_MIPI_DSI=y
CONFIG_DRM_USE_DYNAMIC_DEBUG=y
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
CONFIG_DRM_LOAD_EDID_FIRMWARE=y
CONFIG_DRM_DISPLAY_HELPER=m
CONFIG_DRM_DISPLAY_DP_HELPER=y
CONFIG_DRM_DISPLAY_HDCP_HELPER=y
CONFIG_DRM_DISPLAY_HDMI_HELPER=y
CONFIG_DRM_DP_AUX_CHARDEV=y
# CONFIG_DRM_DP_CEC is not set
CONFIG_DRM_TTM=m
CONFIG_DRM_BUDDY=m
CONFIG_DRM_VRAM_HELPER=m
CONFIG_DRM_TTM_HELPER=m
CONFIG_DRM_GEM_SHMEM_HELPER=m

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_I2C_NXP_TDA9950 is not set
# end of I2C encoder or helper chips

#
# ARM devices
#
# end of ARM devices

# CONFIG_DRM_RADEON is not set
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_FORCE_PROBE=""
CONFIG_DRM_I915_CAPTURE_ERROR=y
CONFIG_DRM_I915_COMPRESS_ERROR=y
CONFIG_DRM_I915_USERPTR=y
# CONFIG_DRM_I915_GVT_KVMGT is not set
CONFIG_DRM_I915_REQUEST_TIMEOUT=20000
CONFIG_DRM_I915_FENCE_TIMEOUT=10000
CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND=250
CONFIG_DRM_I915_HEARTBEAT_INTERVAL=2500
CONFIG_DRM_I915_PREEMPT_TIMEOUT=640
CONFIG_DRM_I915_MAX_REQUEST_BUSYWAIT=8000
CONFIG_DRM_I915_STOP_TIMEOUT=100
CONFIG_DRM_I915_TIMESLICE_DURATION=1
# CONFIG_DRM_VGEM is not set
# CONFIG_DRM_VKMS is not set
# CONFIG_DRM_VMWGFX is not set
CONFIG_DRM_GMA500=m
# CONFIG_DRM_UDL is not set
CONFIG_DRM_AST=m
# CONFIG_DRM_MGAG200 is not set
CONFIG_DRM_QXL=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_DRM_PANEL=y

#
# Display Panels
#
# CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN is not set
# CONFIG_DRM_PANEL_WIDECHIPS_WS2401 is not set
# end of Display Panels

CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y

#
# Display Interface Bridges
#
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# end of Display Interface Bridges

# CONFIG_DRM_ETNAVIV is not set
CONFIG_DRM_BOCHS=m
CONFIG_DRM_CIRRUS_QEMU=m
# CONFIG_DRM_GM12U320 is not set
# CONFIG_DRM_PANEL_MIPI_DBI is not set
# CONFIG_DRM_SIMPLEDRM is not set
# CONFIG_TINYDRM_HX8357D is not set
# CONFIG_TINYDRM_ILI9163 is not set
# CONFIG_TINYDRM_ILI9225 is not set
# CONFIG_TINYDRM_ILI9341 is not set
# CONFIG_TINYDRM_ILI9486 is not set
# CONFIG_TINYDRM_MI0283QT is not set
# CONFIG_TINYDRM_REPAPER is not set
# CONFIG_TINYDRM_ST7586 is not set
# CONFIG_TINYDRM_ST7735R is not set
# CONFIG_DRM_VBOXVIDEO is not set
# CONFIG_DRM_GUD is not set
# CONFIG_DRM_SSD130X is not set
# CONFIG_DRM_LEGACY is not set
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
CONFIG_DRM_NOMODESET=y
CONFIG_DRM_PRIVACY_SCREEN=y

#
# Frame buffer Devices
#
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_MODE_HELPERS is not set
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SM501 is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_SIMPLE is not set
# CONFIG_FB_SSD1307 is not set
# CONFIG_FB_SM712 is not set
# end of Frame buffer Devices

#
# Backlight & LCD device support
#
CONFIG_LCD_CLASS_DEVICE=m
# CONFIG_LCD_L4F00242T03 is not set
# CONFIG_LCD_LMS283GF05 is not set
# CONFIG_LCD_LTV350QV is not set
# CONFIG_LCD_ILI922X is not set
# CONFIG_LCD_ILI9320 is not set
# CONFIG_LCD_TDO24M is not set
# CONFIG_LCD_VGG2432A4 is not set
CONFIG_LCD_PLATFORM=m
# CONFIG_LCD_AMS369FG06 is not set
# CONFIG_LCD_LMS501KF03 is not set
# CONFIG_LCD_HX8357 is not set
# CONFIG_LCD_OTM3225A is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_KTD253 is not set
# CONFIG_BACKLIGHT_PWM is not set
CONFIG_BACKLIGHT_APPLE=m
# CONFIG_BACKLIGHT_QCOM_WLED is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630A is not set
# CONFIG_BACKLIGHT_LM3639 is not set
CONFIG_BACKLIGHT_LP855X=m
# CONFIG_BACKLIGHT_GPIO is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set
# CONFIG_BACKLIGHT_ARCXCNN is not set
# end of Backlight & LCD device support

CONFIG_HDMI=y

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION is not set
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
# end of Console display driver support

CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_LOGO_LINUX_CLUT224=y
# end of Graphics support

# CONFIG_SOUND is not set

#
# HID support
#
CONFIG_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HIDRAW=y
CONFIG_UHID=m
CONFIG_HID_GENERIC=y

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACCUTOUCH is not set
CONFIG_HID_ACRUX=m
# CONFIG_HID_ACRUX_FF is not set
CONFIG_HID_APPLE=m
# CONFIG_HID_APPLEIR is not set
CONFIG_HID_ASUS=m
CONFIG_HID_AUREAL=m
CONFIG_HID_BELKIN=m
# CONFIG_HID_BETOP_FF is not set
# CONFIG_HID_BIGBEN_FF is not set
CONFIG_HID_CHERRY=m
# CONFIG_HID_CHICONY is not set
# CONFIG_HID_CORSAIR is not set
# CONFIG_HID_COUGAR is not set
# CONFIG_HID_MACALLY is not set
CONFIG_HID_CMEDIA=m
# CONFIG_HID_CP2112 is not set
# CONFIG_HID_CREATIVE_SB0540 is not set
CONFIG_HID_CYPRESS=m
CONFIG_HID_DRAGONRISE=m
# CONFIG_DRAGONRISE_FF is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELAN is not set
CONFIG_HID_ELECOM=m
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_FT260 is not set
CONFIG_HID_GEMBIRD=m
CONFIG_HID_GFRM=m
# CONFIG_HID_GLORIOUS is not set
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_VIVALDI is not set
# CONFIG_HID_GT683R is not set
CONFIG_HID_KEYTOUCH=m
CONFIG_HID_KYE=m
# CONFIG_HID_UCLOGIC is not set
CONFIG_HID_WALTOP=m
# CONFIG_HID_VIEWSONIC is not set
# CONFIG_HID_VRC2 is not set
# CONFIG_HID_XIAOMI is not set
CONFIG_HID_GYRATION=m
CONFIG_HID_ICADE=m
CONFIG_HID_ITE=m
CONFIG_HID_JABRA=m
CONFIG_HID_TWINHAN=m
CONFIG_HID_KENSINGTON=m
CONFIG_HID_LCPOWER=m
CONFIG_HID_LED=m
CONFIG_HID_LENOVO=m
# CONFIG_HID_LETSKETCH is not set
CONFIG_HID_LOGITECH=m
CONFIG_HID_LOGITECH_DJ=m
CONFIG_HID_LOGITECH_HIDPP=m
# CONFIG_LOGITECH_FF is not set
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
# CONFIG_LOGIWHEELS_FF is not set
CONFIG_HID_MAGICMOUSE=y
# CONFIG_HID_MALTRON is not set
# CONFIG_HID_MAYFLASH is not set
# CONFIG_HID_MEGAWORLD_FF is not set
# CONFIG_HID_REDRAGON is not set
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
CONFIG_HID_MULTITOUCH=m
# CONFIG_HID_NINTENDO is not set
CONFIG_HID_NTI=m
# CONFIG_HID_NTRIG is not set
CONFIG_HID_ORTEK=m
CONFIG_HID_PANTHERLORD=m
# CONFIG_PANTHERLORD_FF is not set
# CONFIG_HID_PENMOUNT is not set
CONFIG_HID_PETALYNX=m
CONFIG_HID_PICOLCD=m
CONFIG_HID_PICOLCD_FB=y
CONFIG_HID_PICOLCD_BACKLIGHT=y
CONFIG_HID_PICOLCD_LCD=y
CONFIG_HID_PICOLCD_LEDS=y
CONFIG_HID_PICOLCD_CIR=y
CONFIG_HID_PLANTRONICS=m
# CONFIG_HID_PXRC is not set
# CONFIG_HID_RAZER is not set
CONFIG_HID_PRIMAX=m
# CONFIG_HID_RETRODE is not set
# CONFIG_HID_ROCCAT is not set
CONFIG_HID_SAITEK=m
CONFIG_HID_SAMSUNG=m
# CONFIG_HID_SEMITEK is not set
# CONFIG_HID_SIGMAMICRO is not set
# CONFIG_HID_SONY is not set
CONFIG_HID_SPEEDLINK=m
# CONFIG_HID_STEAM is not set
CONFIG_HID_STEELSERIES=m
CONFIG_HID_SUNPLUS=m
CONFIG_HID_RMI=m
CONFIG_HID_GREENASIA=m
# CONFIG_GREENASIA_FF is not set
CONFIG_HID_SMARTJOYPLUS=m
# CONFIG_SMARTJOYPLUS_FF is not set
CONFIG_HID_TIVO=m
CONFIG_HID_TOPSEED=m
# CONFIG_HID_TOPRE is not set
CONFIG_HID_THINGM=m
CONFIG_HID_THRUSTMASTER=m
# CONFIG_THRUSTMASTER_FF is not set
# CONFIG_HID_UDRAW_PS3 is not set
# CONFIG_HID_U2FZERO is not set
# CONFIG_HID_WACOM is not set
CONFIG_HID_WIIMOTE=m
CONFIG_HID_XINMO=m
CONFIG_HID_ZEROPLUS=m
# CONFIG_ZEROPLUS_FF is not set
CONFIG_HID_ZYDACRON=m
CONFIG_HID_SENSOR_HUB=y
CONFIG_HID_SENSOR_CUSTOM_SENSOR=m
CONFIG_HID_ALPS=m
# CONFIG_HID_MCP2221 is not set
# end of Special HID drivers

#
# USB HID support
#
CONFIG_USB_HID=y
# CONFIG_HID_PID is not set
# CONFIG_USB_HIDDEV is not set
# end of USB HID support

#
# I2C HID support
#
# CONFIG_I2C_HID_ACPI is not set
# end of I2C HID support

#
# Intel ISH HID support
#
CONFIG_INTEL_ISH_HID=m
# CONFIG_INTEL_ISH_FIRMWARE_DOWNLOADER is not set
# end of Intel ISH HID support

#
# AMD SFH HID Support
#
# CONFIG_AMD_SFH_HID is not set
# end of AMD SFH HID Support
# end of HID support

CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
# CONFIG_USB_LED_TRIG is not set
# CONFIG_USB_ULPI_BUS is not set
# CONFIG_USB_CONN_GPIO is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_FEW_INIT_RETRIES is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_PRODUCTLIST is not set
CONFIG_USB_LEDS_TRIGGER_USBPORT=y
CONFIG_USB_AUTOSUSPEND_DELAY=2
CONFIG_USB_MON=y

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
CONFIG_USB_XHCI_HCD=y
# CONFIG_USB_XHCI_DBGCAP is not set
CONFIG_USB_XHCI_PCI=y
# CONFIG_USB_XHCI_PCI_RENESAS is not set
# CONFIG_USB_XHCI_PLATFORM is not set
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_FSL is not set
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
# CONFIG_USB_MAX3421_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_BCMA is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set
# CONFIG_USB_UAS is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_USB_CDNS_SUPPORT is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set
# CONFIG_USB_ISP1760 is not set

#
# USB port drivers
#
# CONFIG_USB_USS720 is not set
CONFIG_USB_SERIAL=m
CONFIG_USB_SERIAL_GENERIC=y
# CONFIG_USB_SERIAL_SIMPLE is not set
# CONFIG_USB_SERIAL_AIRCABLE is not set
# CONFIG_USB_SERIAL_ARK3116 is not set
# CONFIG_USB_SERIAL_BELKIN is not set
# CONFIG_USB_SERIAL_CH341 is not set
# CONFIG_USB_SERIAL_WHITEHEAT is not set
# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set
# CONFIG_USB_SERIAL_CP210X is not set
# CONFIG_USB_SERIAL_CYPRESS_M8 is not set
# CONFIG_USB_SERIAL_EMPEG is not set
# CONFIG_USB_SERIAL_FTDI_SIO is not set
# CONFIG_USB_SERIAL_VISOR is not set
# CONFIG_USB_SERIAL_IPAQ is not set
# CONFIG_USB_SERIAL_IR is not set
# CONFIG_USB_SERIAL_EDGEPORT is not set
# CONFIG_USB_SERIAL_EDGEPORT_TI is not set
# CONFIG_USB_SERIAL_F81232 is not set
# CONFIG_USB_SERIAL_F8153X is not set
# CONFIG_USB_SERIAL_GARMIN is not set
# CONFIG_USB_SERIAL_IPW is not set
# CONFIG_USB_SERIAL_IUU is not set
# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set
# CONFIG_USB_SERIAL_KEYSPAN is not set
# CONFIG_USB_SERIAL_KLSI is not set
# CONFIG_USB_SERIAL_KOBIL_SCT is not set
# CONFIG_USB_SERIAL_MCT_U232 is not set
# CONFIG_USB_SERIAL_METRO is not set
# CONFIG_USB_SERIAL_MOS7720 is not set
# CONFIG_USB_SERIAL_MOS7840 is not set
# CONFIG_USB_SERIAL_MXUPORT is not set
# CONFIG_USB_SERIAL_NAVMAN is not set
# CONFIG_USB_SERIAL_PL2303 is not set
# CONFIG_USB_SERIAL_OTI6858 is not set
# CONFIG_USB_SERIAL_QCAUX is not set
# CONFIG_USB_SERIAL_QUALCOMM is not set
# CONFIG_USB_SERIAL_SPCP8X5 is not set
# CONFIG_USB_SERIAL_SAFE is not set
# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set
# CONFIG_USB_SERIAL_SYMBOL is not set
# CONFIG_USB_SERIAL_TI is not set
# CONFIG_USB_SERIAL_CYBERJACK is not set
# CONFIG_USB_SERIAL_OPTION is not set
# CONFIG_USB_SERIAL_OMNINET is not set
# CONFIG_USB_SERIAL_OPTICON is not set
# CONFIG_USB_SERIAL_XSENS_MT is not set
# CONFIG_USB_SERIAL_WISHBONE is not set
# CONFIG_USB_SERIAL_SSU100 is not set
# CONFIG_USB_SERIAL_QT2 is not set
# CONFIG_USB_SERIAL_UPD78F0730 is not set
# CONFIG_USB_SERIAL_XR is not set
CONFIG_USB_SERIAL_DEBUG=m

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_APPLE_MFI_FASTCHARGE is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HUB_USB251XB is not set
# CONFIG_USB_HSIC_USB3503 is not set
# CONFIG_USB_HSIC_USB4604 is not set
# CONFIG_USB_LINK_LAYER_TEST is not set
# CONFIG_USB_CHAOSKEY is not set
# CONFIG_USB_ATM is not set

#
# USB Physical Layer drivers
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_USB_GPIO_VBUS is not set
# CONFIG_USB_ISP1301 is not set
# end of USB Physical Layer drivers

# CONFIG_USB_GADGET is not set
CONFIG_TYPEC=y
# CONFIG_TYPEC_TCPM is not set
CONFIG_TYPEC_UCSI=y
# CONFIG_UCSI_CCG is not set
CONFIG_UCSI_ACPI=y
# CONFIG_UCSI_STM32G0 is not set
# CONFIG_TYPEC_TPS6598X is not set
# CONFIG_TYPEC_RT1719 is not set
# CONFIG_TYPEC_STUSB160X is not set
# CONFIG_TYPEC_WUSB3801 is not set

#
# USB Type-C Multiplexer/DeMultiplexer Switch support
#
# CONFIG_TYPEC_MUX_FSA4480 is not set
# CONFIG_TYPEC_MUX_PI3USB30532 is not set
# end of USB Type-C Multiplexer/DeMultiplexer Switch support

#
# USB Type-C Alternate Mode drivers
#
# CONFIG_TYPEC_DP_ALTMODE is not set
# end of USB Type-C Alternate Mode drivers

# CONFIG_USB_ROLE_SWITCH is not set
CONFIG_MMC=m
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
CONFIG_SDIO_UART=m
# CONFIG_MMC_TEST is not set

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_DEBUG is not set
CONFIG_MMC_SDHCI=m
CONFIG_MMC_SDHCI_IO_ACCESSORS=y
CONFIG_MMC_SDHCI_PCI=m
CONFIG_MMC_RICOH_MMC=y
CONFIG_MMC_SDHCI_ACPI=m
CONFIG_MMC_SDHCI_PLTFM=m
# CONFIG_MMC_SDHCI_F_SDH30 is not set
# CONFIG_MMC_WBSD is not set
# CONFIG_MMC_TIFM_SD is not set
# CONFIG_MMC_SPI is not set
# CONFIG_MMC_CB710 is not set
# CONFIG_MMC_VIA_SDMMC is not set
# CONFIG_MMC_VUB300 is not set
# CONFIG_MMC_USHC is not set
# CONFIG_MMC_USDHI6ROL0 is not set
# CONFIG_MMC_REALTEK_PCI is not set
CONFIG_MMC_CQHCI=m
# CONFIG_MMC_HSQ is not set
# CONFIG_MMC_TOSHIBA_PCI is not set
# CONFIG_MMC_MTK is not set
# CONFIG_MMC_SDHCI_XENON is not set
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
# CONFIG_LEDS_CLASS_FLASH is not set
# CONFIG_LEDS_CLASS_MULTICOLOR is not set
# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set

#
# LED drivers
#
# CONFIG_LEDS_APU is not set
CONFIG_LEDS_LM3530=m
# CONFIG_LEDS_LM3532 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_GPIO is not set
CONFIG_LEDS_LP3944=m
# CONFIG_LEDS_LP3952 is not set
# CONFIG_LEDS_LP50XX is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_DAC124S085 is not set
# CONFIG_LEDS_PWM is not set
# CONFIG_LEDS_BD2802 is not set
CONFIG_LEDS_INTEL_SS4200=m
CONFIG_LEDS_LT3593=m
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_TLC591XX is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_IS31FL319X is not set

#
# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
#
CONFIG_LEDS_BLINKM=m
CONFIG_LEDS_MLXCPLD=m
# CONFIG_LEDS_MLXREG is not set
# CONFIG_LEDS_USER is not set
# CONFIG_LEDS_NIC78BX is not set
# CONFIG_LEDS_TI_LMU_COMMON is not set

#
# Flash and Torch LED drivers
#

#
# RGB LED drivers
#

#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_TIMER=m
CONFIG_LEDS_TRIGGER_ONESHOT=m
# CONFIG_LEDS_TRIGGER_DISK is not set
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_LEDS_TRIGGER_BACKLIGHT=m
# CONFIG_LEDS_TRIGGER_CPU is not set
# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
CONFIG_LEDS_TRIGGER_GPIO=m
CONFIG_LEDS_TRIGGER_DEFAULT_ON=m

#
# iptables trigger is under Netfilter config (LED target)
#
CONFIG_LEDS_TRIGGER_TRANSIENT=m
CONFIG_LEDS_TRIGGER_CAMERA=m
# CONFIG_LEDS_TRIGGER_PANIC is not set
# CONFIG_LEDS_TRIGGER_NETDEV is not set
# CONFIG_LEDS_TRIGGER_PATTERN is not set
CONFIG_LEDS_TRIGGER_AUDIO=m
# CONFIG_LEDS_TRIGGER_TTY is not set

#
# Simple LED drivers
#
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC_ATOMIC_SCRUB=y
CONFIG_EDAC_SUPPORT=y
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_GHES=y
CONFIG_EDAC_E752X=m
CONFIG_EDAC_I82975X=m
CONFIG_EDAC_I3000=m
CONFIG_EDAC_I3200=m
CONFIG_EDAC_IE31200=m
CONFIG_EDAC_X38=m
CONFIG_EDAC_I5400=m
CONFIG_EDAC_I7CORE=m
CONFIG_EDAC_I5000=m
CONFIG_EDAC_I5100=m
CONFIG_EDAC_I7300=m
CONFIG_EDAC_SBRIDGE=m
CONFIG_EDAC_SKX=m
# CONFIG_EDAC_I10NM is not set
CONFIG_EDAC_PND2=m
# CONFIG_EDAC_IGEN6 is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_MC146818_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_SYSTOHC is not set
# CONFIG_RTC_DEBUG is not set
CONFIG_RTC_NVMEM=y

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_ABB5ZES3 is not set
# CONFIG_RTC_DRV_ABEOZ9 is not set
# CONFIG_RTC_DRV_ABX80X is not set
CONFIG_RTC_DRV_DS1307=m
# CONFIG_RTC_DRV_DS1307_CENTURY is not set
CONFIG_RTC_DRV_DS1374=m
# CONFIG_RTC_DRV_DS1374_WDT is not set
CONFIG_RTC_DRV_DS1672=m
CONFIG_RTC_DRV_MAX6900=m
CONFIG_RTC_DRV_RS5C372=m
CONFIG_RTC_DRV_ISL1208=m
CONFIG_RTC_DRV_ISL12022=m
CONFIG_RTC_DRV_X1205=m
CONFIG_RTC_DRV_PCF8523=m
# CONFIG_RTC_DRV_PCF85063 is not set
# CONFIG_RTC_DRV_PCF85363 is not set
CONFIG_RTC_DRV_PCF8563=m
CONFIG_RTC_DRV_PCF8583=m
CONFIG_RTC_DRV_M41T80=m
CONFIG_RTC_DRV_M41T80_WDT=y
CONFIG_RTC_DRV_BQ32K=m
# CONFIG_RTC_DRV_S35390A is not set
CONFIG_RTC_DRV_FM3130=m
# CONFIG_RTC_DRV_RX8010 is not set
CONFIG_RTC_DRV_RX8581=m
CONFIG_RTC_DRV_RX8025=m
CONFIG_RTC_DRV_EM3027=m
# CONFIG_RTC_DRV_RV3028 is not set
# CONFIG_RTC_DRV_RV3032 is not set
# CONFIG_RTC_DRV_RV8803 is not set
# CONFIG_RTC_DRV_SD3078 is not set

#
# SPI RTC drivers
#
# CONFIG_RTC_DRV_M41T93 is not set
# CONFIG_RTC_DRV_M41T94 is not set
# CONFIG_RTC_DRV_DS1302 is not set
# CONFIG_RTC_DRV_DS1305 is not set
# CONFIG_RTC_DRV_DS1343 is not set
# CONFIG_RTC_DRV_DS1347 is not set
# CONFIG_RTC_DRV_DS1390 is not set
# CONFIG_RTC_DRV_MAX6916 is not set
# CONFIG_RTC_DRV_R9701 is not set
CONFIG_RTC_DRV_RX4581=m
# CONFIG_RTC_DRV_RS5C348 is not set
# CONFIG_RTC_DRV_MAX6902 is not set
# CONFIG_RTC_DRV_PCF2123 is not set
# CONFIG_RTC_DRV_MCP795 is not set
CONFIG_RTC_I2C_AND_SPI=y

#
# SPI and I2C RTC drivers
#
CONFIG_RTC_DRV_DS3232=m
CONFIG_RTC_DRV_DS3232_HWMON=y
# CONFIG_RTC_DRV_PCF2127 is not set
CONFIG_RTC_DRV_RV3029C2=m
# CONFIG_RTC_DRV_RV3029_HWMON is not set
# CONFIG_RTC_DRV_RX6110 is not set

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
CONFIG_RTC_DRV_DS1286=m
CONFIG_RTC_DRV_DS1511=m
CONFIG_RTC_DRV_DS1553=m
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
CONFIG_RTC_DRV_DS1742=m
CONFIG_RTC_DRV_DS2404=m
CONFIG_RTC_DRV_STK17TA8=m
# CONFIG_RTC_DRV_M48T86 is not set
CONFIG_RTC_DRV_M48T35=m
CONFIG_RTC_DRV_M48T59=m
CONFIG_RTC_DRV_MSM6242=m
CONFIG_RTC_DRV_BQ4802=m
CONFIG_RTC_DRV_RP5C01=m
CONFIG_RTC_DRV_V3020=m

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_FTRTC010 is not set

#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_GOLDFISH is not set
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
CONFIG_DMA_ENGINE=y
CONFIG_DMA_VIRTUAL_CHANNELS=y
CONFIG_DMA_ACPI=y
# CONFIG_ALTERA_MSGDMA is not set
CONFIG_INTEL_IDMA64=m
# CONFIG_INTEL_IDXD is not set
# CONFIG_INTEL_IDXD_COMPAT is not set
CONFIG_INTEL_IOATDMA=m
# CONFIG_PLX_DMA is not set
# CONFIG_AMD_PTDMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
# CONFIG_QCOM_HIDMA is not set
CONFIG_DW_DMAC_CORE=y
CONFIG_DW_DMAC=m
CONFIG_DW_DMAC_PCI=y
# CONFIG_DW_EDMA is not set
# CONFIG_DW_EDMA_PCIE is not set
CONFIG_HSU_DMA=y
# CONFIG_SF_PDMA is not set
# CONFIG_INTEL_LDMA is not set

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
CONFIG_DMATEST=m
CONFIG_DMA_ENGINE_RAID=y

#
# DMABUF options
#
CONFIG_SYNC_FILE=y
# CONFIG_SW_SYNC is not set
# CONFIG_UDMABUF is not set
# CONFIG_DMABUF_MOVE_NOTIFY is not set
# CONFIG_DMABUF_DEBUG is not set
# CONFIG_DMABUF_SELFTESTS is not set
# CONFIG_DMABUF_HEAPS is not set
# CONFIG_DMABUF_SYSFS_STATS is not set
# end of DMABUF options

CONFIG_DCA=m
# CONFIG_AUXDISPLAY is not set
# CONFIG_PANEL is not set
CONFIG_UIO=m
CONFIG_UIO_CIF=m
CONFIG_UIO_PDRV_GENIRQ=m
# CONFIG_UIO_DMEM_GENIRQ is not set
CONFIG_UIO_AEC=m
CONFIG_UIO_SERCOS3=m
CONFIG_UIO_PCI_GENERIC=m
# CONFIG_UIO_NETX is not set
# CONFIG_UIO_PRUSS is not set
# CONFIG_UIO_MF624 is not set
CONFIG_VFIO=m
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO_VIRQFD=m
CONFIG_VFIO_NOIOMMU=y
CONFIG_VFIO_PCI_CORE=m
CONFIG_VFIO_PCI_MMAP=y
CONFIG_VFIO_PCI_INTX=y
CONFIG_VFIO_PCI=m
# CONFIG_VFIO_PCI_VGA is not set
# CONFIG_VFIO_PCI_IGD is not set
CONFIG_VFIO_MDEV=m
CONFIG_IRQ_BYPASS_MANAGER=m
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO_ANCHOR=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI_LIB_LEGACY=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
# CONFIG_VIRTIO_PMEM is not set
CONFIG_VIRTIO_BALLOON=m
# CONFIG_VIRTIO_MEM is not set
CONFIG_VIRTIO_INPUT=m
# CONFIG_VIRTIO_MMIO is not set
CONFIG_VIRTIO_DMA_SHARED_BUFFER=m
# CONFIG_VDPA is not set
CONFIG_VHOST_IOTLB=m
CONFIG_VHOST=m
CONFIG_VHOST_MENU=y
CONFIG_VHOST_NET=m
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_VSOCK=m
# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set
# end of Microsoft Hyper-V guest support

# CONFIG_GREYBUS is not set
# CONFIG_COMEDI is not set
# CONFIG_STAGING is not set
# CONFIG_CHROME_PLATFORMS is not set
CONFIG_MELLANOX_PLATFORM=y
CONFIG_MLXREG_HOTPLUG=m
# CONFIG_MLXREG_IO is not set
# CONFIG_MLXREG_LC is not set
# CONFIG_NVSW_SN2201 is not set
CONFIG_SURFACE_PLATFORMS=y
# CONFIG_SURFACE3_WMI is not set
# CONFIG_SURFACE_3_POWER_OPREGION is not set
# CONFIG_SURFACE_GPE is not set
# CONFIG_SURFACE_HOTPLUG is not set
# CONFIG_SURFACE_PRO3_BUTTON is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_ACPI_WMI=m
CONFIG_WMI_BMOF=m
# CONFIG_HUAWEI_WMI is not set
# CONFIG_UV_SYSFS is not set
CONFIG_MXM_WMI=m
# CONFIG_PEAQ_WMI is not set
# CONFIG_NVIDIA_WMI_EC_BACKLIGHT is not set
# CONFIG_XIAOMI_WMI is not set
# CONFIG_GIGABYTE_WMI is not set
# CONFIG_YOGABOOK_WMI is not set
CONFIG_ACERHDF=m
# CONFIG_ACER_WIRELESS is not set
CONFIG_ACER_WMI=m
# CONFIG_AMD_PMF is not set
# CONFIG_AMD_PMC is not set
# CONFIG_AMD_HSMP is not set
# CONFIG_ADV_SWBUTTON is not set
CONFIG_APPLE_GMUX=m
CONFIG_ASUS_LAPTOP=m
# CONFIG_ASUS_WIRELESS is not set
CONFIG_ASUS_WMI=m
CONFIG_ASUS_NB_WMI=m
# CONFIG_ASUS_TF103C_DOCK is not set
# CONFIG_MERAKI_MX100 is not set
CONFIG_EEEPC_LAPTOP=m
CONFIG_EEEPC_WMI=m
# CONFIG_X86_PLATFORM_DRIVERS_DELL is not set
CONFIG_AMILO_RFKILL=m
CONFIG_FUJITSU_LAPTOP=m
CONFIG_FUJITSU_TABLET=m
# CONFIG_GPD_POCKET_FAN is not set
CONFIG_HP_ACCEL=m
# CONFIG_WIRELESS_HOTKEY is not set
CONFIG_HP_WMI=m
# CONFIG_IBM_RTL is not set
CONFIG_IDEAPAD_LAPTOP=m
CONFIG_SENSORS_HDAPS=m
CONFIG_THINKPAD_ACPI=m
# CONFIG_THINKPAD_ACPI_DEBUGFACILITIES is not set
# CONFIG_THINKPAD_ACPI_DEBUG is not set
# CONFIG_THINKPAD_ACPI_UNSAFE_LEDS is not set
CONFIG_THINKPAD_ACPI_VIDEO=y
CONFIG_THINKPAD_ACPI_HOTKEY_POLL=y
# CONFIG_THINKPAD_LMI is not set
# CONFIG_INTEL_ATOMISP2_PM is not set
# CONFIG_INTEL_SAR_INT1092 is not set
CONFIG_INTEL_PMC_CORE=m

#
# Intel Speed Select Technology interface support
#
# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
# end of Intel Speed Select Technology interface support

CONFIG_INTEL_WMI=y
# CONFIG_INTEL_WMI_SBL_FW_UPDATE is not set
CONFIG_INTEL_WMI_THUNDERBOLT=m

#
# Intel Uncore Frequency Control
#
# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
# end of Intel Uncore Frequency Control

CONFIG_INTEL_HID_EVENT=m
CONFIG_INTEL_VBTN=m
# CONFIG_INTEL_INT0002_VGPIO is not set
CONFIG_INTEL_OAKTRAIL=m
# CONFIG_INTEL_ISHTP_ECLITE is not set
# CONFIG_INTEL_PUNIT_IPC is not set
CONFIG_INTEL_RST=m
# CONFIG_INTEL_SMARTCONNECT is not set
CONFIG_INTEL_TURBO_MAX_3=y
# CONFIG_INTEL_VSEC is not set
CONFIG_MSI_LAPTOP=m
CONFIG_MSI_WMI=m
# CONFIG_PCENGINES_APU2 is not set
# CONFIG_BARCO_P50_GPIO is not set
CONFIG_SAMSUNG_LAPTOP=m
CONFIG_SAMSUNG_Q10=m
CONFIG_TOSHIBA_BT_RFKILL=m
# CONFIG_TOSHIBA_HAPS is not set
# CONFIG_TOSHIBA_WMI is not set
CONFIG_ACPI_CMPC=m
CONFIG_COMPAL_LAPTOP=m
# CONFIG_LG_LAPTOP is not set
CONFIG_PANASONIC_LAPTOP=m
CONFIG_SONY_LAPTOP=m
CONFIG_SONYPI_COMPAT=y
# CONFIG_SYSTEM76_ACPI is not set
CONFIG_TOPSTAR_LAPTOP=m
# CONFIG_SERIAL_MULTI_INSTANTIATE is not set
CONFIG_MLX_PLATFORM=m
CONFIG_INTEL_IPS=m
# CONFIG_INTEL_SCU_PCI is not set
# CONFIG_INTEL_SCU_PLATFORM is not set
# CONFIG_SIEMENS_SIMATIC_IPC is not set
# CONFIG_WINMATE_FM07_KEYS is not set
CONFIG_P2SB=y
CONFIG_HAVE_CLK=y
CONFIG_HAVE_CLK_PREPARE=y
CONFIG_COMMON_CLK=y
# CONFIG_LMK04832 is not set
# CONFIG_COMMON_CLK_MAX9485 is not set
# CONFIG_COMMON_CLK_SI5341 is not set
# CONFIG_COMMON_CLK_SI5351 is not set
# CONFIG_COMMON_CLK_SI544 is not set
# CONFIG_COMMON_CLK_CDCE706 is not set
# CONFIG_COMMON_CLK_CS2000_CP is not set
# CONFIG_COMMON_CLK_PWM is not set
# CONFIG_XILINX_VCU is not set
CONFIG_HWSPINLOCK=y

#
# Clock Source drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# end of Clock Source drivers

CONFIG_MAILBOX=y
CONFIG_PCC=y
# CONFIG_ALTERA_MBOX is not set
CONFIG_IOMMU_IOVA=y
CONFIG_IOASID=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y

#
# Generic IOMMU Pagetable Support
#
# end of Generic IOMMU Pagetable Support

# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMU_DEFAULT_DMA_STRICT is not set
CONFIG_IOMMU_DEFAULT_DMA_LAZY=y
# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
CONFIG_IOMMU_DMA=y
CONFIG_IOMMU_SVA=y
# CONFIG_AMD_IOMMU is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
CONFIG_INTEL_IOMMU_SVM=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON=y
CONFIG_IRQ_REMAP=y
# CONFIG_VIRTIO_IOMMU is not set

#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers

#
# Rpmsg drivers
#
# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers

# CONFIG_SOUNDWIRE is not set

#
# SOC (System On Chip) specific Drivers
#

#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers

#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers

#
# NXP/Freescale QorIQ SoC drivers
#
# end of NXP/Freescale QorIQ SoC drivers

#
# fujitsu SoC drivers
#
# end of fujitsu SoC drivers

#
# i.MX SoC drivers
#
# end of i.MX SoC drivers

#
# Enable LiteX SoC Builder specific drivers
#
# end of Enable LiteX SoC Builder specific drivers

#
# Qualcomm SoC drivers
#
# end of Qualcomm SoC drivers

# CONFIG_SOC_TI is not set

#
# Xilinx SoC drivers
#
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers

# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
CONFIG_NTB=m
# CONFIG_NTB_MSI is not set
# CONFIG_NTB_AMD is not set
# CONFIG_NTB_IDT is not set
# CONFIG_NTB_INTEL is not set
# CONFIG_NTB_EPF is not set
# CONFIG_NTB_SWITCHTEC is not set
# CONFIG_NTB_PINGPONG is not set
# CONFIG_NTB_TOOL is not set
# CONFIG_NTB_PERF is not set
# CONFIG_NTB_TRANSPORT is not set
CONFIG_PWM=y
CONFIG_PWM_SYSFS=y
# CONFIG_PWM_DEBUG is not set
# CONFIG_PWM_CLK is not set
# CONFIG_PWM_DWC is not set
CONFIG_PWM_LPSS=m
CONFIG_PWM_LPSS_PCI=m
CONFIG_PWM_LPSS_PLATFORM=m
# CONFIG_PWM_PCA9685 is not set

#
# IRQ chip support
#
# end of IRQ chip support

# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set

#
# PHY Subsystem
#
# CONFIG_GENERIC_PHY is not set
# CONFIG_USB_LGM_PHY is not set
# CONFIG_PHY_CAN_TRANSCEIVER is not set

#
# PHY drivers for Broadcom platforms
#
# CONFIG_BCM_KONA_USB2_PHY is not set
# end of PHY drivers for Broadcom platforms

# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# CONFIG_PHY_INTEL_LGM_EMMC is not set
# end of PHY Subsystem

CONFIG_POWERCAP=y
CONFIG_INTEL_RAPL_CORE=m
CONFIG_INTEL_RAPL=m
# CONFIG_IDLE_INJECT is not set
# CONFIG_MCB is not set

#
# Performance monitor support
#
# end of Performance monitor support

CONFIG_RAS=y
# CONFIG_RAS_CEC is not set
# CONFIG_USB4 is not set

#
# Android
#
# CONFIG_ANDROID_BINDER_IPC is not set
# end of Android

CONFIG_LIBNVDIMM=m
CONFIG_BLK_DEV_PMEM=m
CONFIG_ND_CLAIM=y
CONFIG_ND_BTT=m
CONFIG_BTT=y
CONFIG_ND_PFN=m
CONFIG_NVDIMM_PFN=y
CONFIG_NVDIMM_DAX=y
CONFIG_NVDIMM_KEYS=y
CONFIG_DAX=y
CONFIG_DEV_DAX=m
CONFIG_DEV_DAX_PMEM=m
CONFIG_DEV_DAX_KMEM=m
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y
# CONFIG_NVMEM_RMEM is not set

#
# HW tracing support
#
CONFIG_STM=m
# CONFIG_STM_PROTO_BASIC is not set
# CONFIG_STM_PROTO_SYS_T is not set
CONFIG_STM_DUMMY=m
CONFIG_STM_SOURCE_CONSOLE=m
CONFIG_STM_SOURCE_HEARTBEAT=m
CONFIG_STM_SOURCE_FTRACE=m
CONFIG_INTEL_TH=m
CONFIG_INTEL_TH_PCI=m
CONFIG_INTEL_TH_ACPI=m
CONFIG_INTEL_TH_GTH=m
CONFIG_INTEL_TH_STH=m
CONFIG_INTEL_TH_MSU=m
CONFIG_INTEL_TH_PTI=m
# CONFIG_INTEL_TH_DEBUG is not set
# end of HW tracing support

# CONFIG_FPGA is not set
# CONFIG_TEE is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
# CONFIG_COUNTER is not set
# CONFIG_MOST is not set
# CONFIG_PECI is not set
# CONFIG_HTE is not set
# end of Device Drivers

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_VALIDATE_FS_PARSER is not set
CONFIG_FS_IOMAP=y
CONFIG_EXT2_FS=m
# CONFIG_EXT2_FS_XATTR is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_XFS_FS=m
CONFIG_XFS_SUPPORT_V4=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
CONFIG_XFS_ONLINE_SCRUB=y
# CONFIG_XFS_ONLINE_REPAIR is not set
CONFIG_XFS_DEBUG=y
CONFIG_XFS_ASSERT_FATAL=y
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=y
CONFIG_OCFS2_FS=m
CONFIG_OCFS2_FS_O2CB=m
CONFIG_OCFS2_FS_USERSPACE_CLUSTER=m
CONFIG_OCFS2_FS_STATS=y
CONFIG_OCFS2_DEBUG_MASKLOG=y
# CONFIG_OCFS2_DEBUG_FS is not set
CONFIG_BTRFS_FS=m
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
# CONFIG_BTRFS_ASSERT is not set
# CONFIG_BTRFS_FS_REF_VERIFY is not set
# CONFIG_NILFS2_FS is not set
CONFIG_F2FS_FS=m
CONFIG_F2FS_STAT_FS=y
CONFIG_F2FS_FS_XATTR=y
CONFIG_F2FS_FS_POSIX_ACL=y
# CONFIG_F2FS_FS_SECURITY is not set
# CONFIG_F2FS_CHECK_FS is not set
# CONFIG_F2FS_FAULT_INJECTION is not set
# CONFIG_F2FS_FS_COMPRESSION is not set
CONFIG_F2FS_IOSTAT=y
# CONFIG_F2FS_UNFAIR_RWSEM is not set
CONFIG_FS_DAX=y
CONFIG_FS_DAX_PMD=y
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FILE_LOCKING=y
CONFIG_FS_ENCRYPTION=y
CONFIG_FS_ENCRYPTION_ALGS=y
# CONFIG_FS_VERITY is not set
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_PRINT_QUOTA_WARNING=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_AUTOFS4_FS=y
CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=m
CONFIG_CUSE=m
# CONFIG_VIRTIO_FS is not set
CONFIG_OVERLAY_FS=m
# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set
# CONFIG_OVERLAY_FS_INDEX is not set
# CONFIG_OVERLAY_FS_XINO_AUTO is not set
# CONFIG_OVERLAY_FS_METACOPY is not set

#
# Caches
#
CONFIG_NETFS_SUPPORT=m
CONFIG_NETFS_STATS=y
CONFIG_FSCACHE=m
CONFIG_FSCACHE_STATS=y
# CONFIG_FSCACHE_DEBUG is not set
CONFIG_CACHEFILES=m
# CONFIG_CACHEFILES_DEBUG is not set
# CONFIG_CACHEFILES_ERROR_INJECTION is not set
# CONFIG_CACHEFILES_ONDEMAND is not set
# end of Caches

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
# end of CD-ROM/DVD Filesystems

#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_FAT_DEFAULT_UTF8 is not set
# CONFIG_EXFAT_FS is not set
# CONFIG_NTFS_FS is not set
# CONFIG_NTFS3_FS is not set
# end of DOS/FAT/EXFAT/NT Filesystems

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_VMCORE_DEVICE_DUMP=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_PROC_CHILDREN=y
CONFIG_PROC_PID_ARCH_STATUS=y
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_INODE64 is not set
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y
# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set
CONFIG_MEMFD_CREATE=y
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_EFIVAR_FS=y
# end of Pseudo filesystems

CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ORANGEFS_FS is not set
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
CONFIG_CRAMFS=m
CONFIG_CRAMFS_BLOCKDEV=y
CONFIG_SQUASHFS=m
# CONFIG_SQUASHFS_FILE_CACHE is not set
CONFIG_SQUASHFS_FILE_DIRECT=y
# CONFIG_SQUASHFS_DECOMP_SINGLE is not set
# CONFIG_SQUASHFS_DECOMP_MULTI is not set
CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU=y
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
# CONFIG_SQUASHFS_LZ4 is not set
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
# CONFIG_SQUASHFS_ZSTD is not set
# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
CONFIG_PSTORE_DEFAULT_KMSG_BYTES=10240
CONFIG_PSTORE_DEFLATE_COMPRESS=y
# CONFIG_PSTORE_LZO_COMPRESS is not set
# CONFIG_PSTORE_LZ4_COMPRESS is not set
# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
# CONFIG_PSTORE_842_COMPRESS is not set
# CONFIG_PSTORE_ZSTD_COMPRESS is not set
CONFIG_PSTORE_COMPRESS=y
CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_PMSG is not set
# CONFIG_PSTORE_FTRACE is not set
CONFIG_PSTORE_RAM=m
# CONFIG_PSTORE_BLK is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_EROFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
# CONFIG_NFS_V2 is not set
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=m
CONFIG_PNFS_BLOCK=m
CONFIG_PNFS_FLEXFILE_LAYOUT=m
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_NFS_V4_SECURITY_LABEL=y
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFS_DEBUG=y
CONFIG_NFS_DISABLE_UDP_SUPPORT=y
# CONFIG_NFS_V4_2_READ_PLUS is not set
CONFIG_NFSD=m
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_PNFS=y
# CONFIG_NFSD_BLOCKLAYOUT is not set
CONFIG_NFSD_SCSILAYOUT=y
# CONFIG_NFSD_FLEXFILELAYOUT is not set
# CONFIG_NFSD_V4_2_INTER_SSC is not set
CONFIG_NFSD_V4_SECURITY_LABEL=y
CONFIG_GRACE_PERIOD=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_NFS_V4_2_SSC_HELPER=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=m
CONFIG_SUNRPC_BACKCHANNEL=y
CONFIG_RPCSEC_GSS_KRB5=m
# CONFIG_SUNRPC_DISABLE_INSECURE_ENCTYPES is not set
CONFIG_SUNRPC_DEBUG=y
CONFIG_CEPH_FS=m
# CONFIG_CEPH_FSCACHE is not set
CONFIG_CEPH_FS_POSIX_ACL=y
# CONFIG_CEPH_FS_SECURITY_LABEL is not set
CONFIG_CIFS=m
CONFIG_CIFS_STATS2=y
CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
CONFIG_CIFS_DFS_UPCALL=y
# CONFIG_CIFS_SWN_UPCALL is not set
# CONFIG_CIFS_FSCACHE is not set
# CONFIG_SMB_SERVER is not set
CONFIG_SMBFS_COMMON=m
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
CONFIG_NLS_MAC_ROMAN=m
CONFIG_NLS_MAC_CELTIC=m
CONFIG_NLS_MAC_CENTEURO=m
CONFIG_NLS_MAC_CROATIAN=m
CONFIG_NLS_MAC_CYRILLIC=m
CONFIG_NLS_MAC_GAELIC=m
CONFIG_NLS_MAC_GREEK=m
CONFIG_NLS_MAC_ICELAND=m
CONFIG_NLS_MAC_INUIT=m
CONFIG_NLS_MAC_ROMANIAN=m
CONFIG_NLS_MAC_TURKISH=m
CONFIG_NLS_UTF8=m
CONFIG_DLM=m
# CONFIG_DLM_DEPRECATED_API is not set
CONFIG_DLM_DEBUG=y
# CONFIG_UNICODE is not set
CONFIG_IO_WQ=y
# end of File systems

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_KEYS_REQUEST_CACHE is not set
CONFIG_PERSISTENT_KEYRINGS=y
CONFIG_TRUSTED_KEYS=y
CONFIG_TRUSTED_KEYS_TPM=y
CONFIG_ENCRYPTED_KEYS=y
# CONFIG_USER_DECRYPTED_DATA is not set
# CONFIG_KEY_DH_OPERATIONS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_NETWORK_XFRM=y
# CONFIG_SECURITY_PATH is not set
CONFIG_INTEL_TXT=y
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_FORTIFY_SOURCE=y
# CONFIG_STATIC_USERMODEHELPER is not set
# CONFIG_SECURITY_SELINUX is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
# CONFIG_SECURITY_LANDLOCK is not set
CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
CONFIG_INTEGRITY_AUDIT=y
# CONFIG_IMA is not set
# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_DAC=y
CONFIG_LSM="landlock,lockdown,yama,loadpin,safesetid,integrity,bpf"

#
# Kernel hardening options
#

#
# Memory initialization
#
CONFIG_INIT_STACK_NONE=y
# CONFIG_GCC_PLUGIN_STRUCTLEAK_USER is not set
# CONFIG_GCC_PLUGIN_STACKLEAK is not set
# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
CONFIG_CC_HAS_ZERO_CALL_USED_REGS=y
# CONFIG_ZERO_CALL_USED_REGS is not set
# end of Memory initialization

CONFIG_RANDSTRUCT_NONE=y
# CONFIG_RANDSTRUCT_FULL is not set
# CONFIG_RANDSTRUCT_PERFORMANCE is not set
# end of Kernel hardening options
# end of Security options

CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SKCIPHER=y
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=y
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=m
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_NULL2=y
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=m
# CONFIG_CRYPTO_TEST is not set
CONFIG_CRYPTO_SIMD=y
# end of Crypto core or helper

#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_DH=m
# CONFIG_CRYPTO_DH_RFC7919_GROUPS is not set
CONFIG_CRYPTO_ECC=m
CONFIG_CRYPTO_ECDH=m
# CONFIG_CRYPTO_ECDSA is not set
# CONFIG_CRYPTO_ECRDSA is not set
# CONFIG_CRYPTO_SM2 is not set
# CONFIG_CRYPTO_CURVE25519 is not set
# end of Public-key cryptography

#
# Block ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_TI is not set
CONFIG_CRYPTO_ANUBIS=m
# CONFIG_CRYPTO_ARIA is not set
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_CAMELLIA=m
CONFIG_CRYPTO_CAST_COMMON=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST6=m
CONFIG_CRYPTO_DES=m
CONFIG_CRYPTO_FCRYPT=m
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
# CONFIG_CRYPTO_SM4_GENERIC is not set
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_TWOFISH_COMMON=m
# end of Block ciphers

#
# Length-preserving ciphers and modes
#
# CONFIG_CRYPTO_ADIANTUM is not set
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_CHACHA20=m
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CFB=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=m
CONFIG_CRYPTO_ECB=y
# CONFIG_CRYPTO_HCTR2 is not set
# CONFIG_CRYPTO_KEYWRAP is not set
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_OFB is not set
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_XTS=m
# end of Length-preserving ciphers and modes

#
# AEAD (authenticated encryption with associated data) ciphers
#
# CONFIG_CRYPTO_AEGIS128 is not set
# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_CCM=m
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_ECHAINIV=m
CONFIG_CRYPTO_ESSIV=m
# end of AEAD (authenticated encryption with associated data) ciphers

#
# Hashes, digests, and MACs
#
CONFIG_CRYPTO_BLAKE2B=m
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_GHASH=y
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_MD4=m
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
# CONFIG_CRYPTO_POLY1305 is not set
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_SHA3=m
# CONFIG_CRYPTO_SM3_GENERIC is not set
# CONFIG_CRYPTO_STREEBOG is not set
CONFIG_CRYPTO_VMAC=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_XCBC=m
CONFIG_CRYPTO_XXHASH=m
# end of Hashes, digests, and MACs

#
# CRCs (cyclic redundancy checks)
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_CRC64_ROCKSOFT=m
# end of CRCs (cyclic redundancy checks)

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_842 is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set
# CONFIG_CRYPTO_ZSTD is not set
# end of Compression

#
# Random number generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_DRBG_MENU=y
CONFIG_CRYPTO_DRBG_HMAC=y
CONFIG_CRYPTO_DRBG_HASH=y
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=y
CONFIG_CRYPTO_JITTERENTROPY=y
# end of Random number generation

#
# Userspace interface
#
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
CONFIG_CRYPTO_USER_API_RNG=y
# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
CONFIG_CRYPTO_USER_API_AEAD=y
CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y
# CONFIG_CRYPTO_STATS is not set
# end of Userspace interface

CONFIG_CRYPTO_HASH_INFO=y

#
# Accelerated Cryptographic Algorithms for CPU (x86)
#
# CONFIG_CRYPTO_CURVE25519_X86 is not set
CONFIG_CRYPTO_AES_NI_INTEL=y
CONFIG_CRYPTO_BLOWFISH_X86_64=m
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
CONFIG_CRYPTO_CAST5_AVX_X86_64=m
CONFIG_CRYPTO_CAST6_AVX_X86_64=m
# CONFIG_CRYPTO_DES3_EDE_X86_64 is not set
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
# CONFIG_CRYPTO_SM4_AESNI_AVX_X86_64 is not set
# CONFIG_CRYPTO_SM4_AESNI_AVX2_X86_64 is not set
CONFIG_CRYPTO_TWOFISH_X86_64=m
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m
# CONFIG_CRYPTO_ARIA_AESNI_AVX_X86_64 is not set
CONFIG_CRYPTO_CHACHA20_X86_64=m
# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set
# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set
# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set
# CONFIG_CRYPTO_BLAKE2S_X86 is not set
# CONFIG_CRYPTO_POLYVAL_CLMUL_NI is not set
# CONFIG_CRYPTO_POLY1305_X86_64 is not set
CONFIG_CRYPTO_SHA1_SSSE3=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=m
# CONFIG_CRYPTO_SM3_AVX_X86_64 is not set
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m
CONFIG_CRYPTO_CRC32C_INTEL=m
CONFIG_CRYPTO_CRC32_PCLMUL=m
CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
# end of Accelerated Cryptographic Algorithms for CPU (x86)

CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=m
CONFIG_CRYPTO_DEV_PADLOCK_AES=m
CONFIG_CRYPTO_DEV_PADLOCK_SHA=m
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_CCP=y
CONFIG_CRYPTO_DEV_CCP_DD=m
CONFIG_CRYPTO_DEV_SP_CCP=y
CONFIG_CRYPTO_DEV_CCP_CRYPTO=m
CONFIG_CRYPTO_DEV_SP_PSP=y
# CONFIG_CRYPTO_DEV_CCP_DEBUGFS is not set
CONFIG_CRYPTO_DEV_QAT=m
CONFIG_CRYPTO_DEV_QAT_DH895xCC=m
CONFIG_CRYPTO_DEV_QAT_C3XXX=m
CONFIG_CRYPTO_DEV_QAT_C62X=m
# CONFIG_CRYPTO_DEV_QAT_4XXX is not set
CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
CONFIG_CRYPTO_DEV_QAT_C3XXXVF=m
CONFIG_CRYPTO_DEV_QAT_C62XVF=m
CONFIG_CRYPTO_DEV_NITROX=m
CONFIG_CRYPTO_DEV_NITROX_CNN55XX=m
# CONFIG_CRYPTO_DEV_VIRTIO is not set
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
CONFIG_X509_CERTIFICATE_PARSER=y
# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
CONFIG_SIGNED_PE_FILE_VERIFICATION=y
# CONFIG_FIPS_SIGNATURE_SELFTEST is not set

#
# Certificates for signature checking
#
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
CONFIG_MODULE_SIG_KEY_TYPE_RSA=y
# CONFIG_MODULE_SIG_KEY_TYPE_ECDSA is not set
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_SYSTEM_BLACKLIST_HASH_LIST=""
# CONFIG_SYSTEM_REVOCATION_LIST is not set
# CONFIG_SYSTEM_BLACKLIST_AUTH_UPDATE is not set
# end of Certificates for signature checking

CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_RAID6_PQ_BENCHMARK=y
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_CORDIC=m
# CONFIG_PRIME_NUMBERS is not set
CONFIG_RATIONAL=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
CONFIG_ARCH_USE_SYM_ANNOTATIONS=y

#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_UTILS=y
CONFIG_CRYPTO_LIB_AES=y
CONFIG_CRYPTO_LIB_ARC4=m
CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC=y
CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=m
CONFIG_CRYPTO_LIB_CHACHA_GENERIC=m
# CONFIG_CRYPTO_LIB_CHACHA is not set
# CONFIG_CRYPTO_LIB_CURVE25519 is not set
CONFIG_CRYPTO_LIB_DES=m
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
# CONFIG_CRYPTO_LIB_POLY1305 is not set
# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_LIB_SHA1=y
CONFIG_CRYPTO_LIB_SHA256=y
# end of Crypto library routines

CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC64_ROCKSOFT=m
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
CONFIG_CRC64=m
# CONFIG_CRC4 is not set
CONFIG_CRC7=m
CONFIG_LIBCRC32C=m
CONFIG_CRC8=m
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_ZSTD_COMMON=y
CONFIG_ZSTD_COMPRESS=m
CONFIG_ZSTD_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
# CONFIG_XZ_DEC_MICROLZMA is not set
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_DECOMPRESS_ZSTD=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=m
CONFIG_REED_SOLOMON_ENC8=y
CONFIG_REED_SOLOMON_DEC8=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_INTERVAL_TREE=y
CONFIG_XARRAY_MULTI=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_DMA_OPS=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED=y
CONFIG_SWIOTLB=y
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_DMA_MAP_BENCHMARK is not set
CONFIG_SGL_ALLOC=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPUMASK_OFFSTACK=y
# CONFIG_FORCE_NR_CPUS is not set
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_CLZ_TAB=y
CONFIG_IRQ_POLL=y
CONFIG_MPILIB=y
CONFIG_SIGNATURE=y
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_MEMREGION=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_HAS_COPY_MC=y
CONFIG_ARCH_STACKWALK=y
CONFIG_STACKDEPOT=y
CONFIG_STACKDEPOT_ALWAYS_INIT=y
CONFIG_SBITMAP=y
# end of Library routines

CONFIG_ASN1_ENCODER=y

#
# Kernel hacking
#

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
CONFIG_PRINTK_CALLER=y
# CONFIG_STACKTRACE_BUILD_ID is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
CONFIG_BOOT_PRINTK_DELAY=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DYNAMIC_DEBUG_CORE=y
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options

CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MISC=y

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_AS_HAS_NON_CONST_LEB128=y
# CONFIG_DEBUG_INFO_NONE is not set
# CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
CONFIG_DEBUG_INFO_DWARF4=y
# CONFIG_DEBUG_INFO_DWARF5 is not set
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_DEBUG_INFO_COMPRESSED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
CONFIG_PAHOLE_HAS_SPLIT_BTF=y
# CONFIG_GDB_SCRIPTS is not set
CONFIG_FRAME_WARN=8192
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
CONFIG_OBJTOOL=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options

#
# Generic Kernel Debugging Instruments
#
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_MAGIC_SYSRQ_SERIAL=y
CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
# CONFIG_DEBUG_FS_ALLOW_NONE is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
CONFIG_UBSAN=y
# CONFIG_UBSAN_TRAP is not set
CONFIG_CC_HAS_UBSAN_BOUNDS=y
CONFIG_UBSAN_BOUNDS=y
CONFIG_UBSAN_ONLY_BOUNDS=y
CONFIG_UBSAN_SHIFT=y
# CONFIG_UBSAN_DIV_ZERO is not set
# CONFIG_UBSAN_BOOL is not set
# CONFIG_UBSAN_ENUM is not set
# CONFIG_UBSAN_ALIGNMENT is not set
CONFIG_UBSAN_SANITIZE_ALL=y
# CONFIG_TEST_UBSAN is not set
CONFIG_HAVE_ARCH_KCSAN=y
CONFIG_HAVE_KCSAN_COMPILER=y
# end of Generic Kernel Debugging Instruments

#
# Networking Debugging
#
# CONFIG_NET_DEV_REFCNT_TRACKER is not set
# CONFIG_NET_NS_REFCNT_TRACKER is not set
# CONFIG_DEBUG_NET is not set
# end of Networking Debugging

#
# Memory Debugging
#
CONFIG_PAGE_EXTENSION=y
# CONFIG_DEBUG_PAGEALLOC is not set
CONFIG_SLUB_DEBUG=y
# CONFIG_SLUB_DEBUG_ON is not set
CONFIG_PAGE_OWNER=y
# CONFIG_PAGE_TABLE_CHECK is not set
# CONFIG_PAGE_POISONING is not set
# CONFIG_DEBUG_PAGE_REF is not set
# CONFIG_DEBUG_RODATA_TEST is not set
CONFIG_ARCH_HAS_DEBUG_WX=y
# CONFIG_DEBUG_WX is not set
CONFIG_GENERIC_PTDUMP=y
# CONFIG_PTDUMP_DEBUGFS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SHRINKER_DEBUG is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_SCHED_STACK_END_CHECK is not set
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VM_PGTABLE is not set
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y
# CONFIG_KASAN_OUTLINE is not set
CONFIG_KASAN_INLINE=y
CONFIG_KASAN_STACK=y
CONFIG_KASAN_VMALLOC=y
# CONFIG_KASAN_MODULE_TEST is not set
CONFIG_HAVE_ARCH_KFENCE=y
# CONFIG_KFENCE is not set
CONFIG_HAVE_ARCH_KMSAN=y
# end of Memory Debugging

CONFIG_DEBUG_SHIRQ=y

#
# Debug Oops, Lockups and Hangs
#
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=480
# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
CONFIG_WQ_WATCHDOG=y
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs

#
# Scheduler Debugging
#
CONFIG_SCHED_DEBUG=y
CONFIG_SCHED_INFO=y
CONFIG_SCHEDSTATS=y
# end of Scheduler Debugging

# CONFIG_DEBUG_TIMEKEEPING is not set

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_LOCK_TORTURE_TEST is not set
# CONFIG_WW_MUTEX_SELFTEST is not set
# CONFIG_SCF_TORTURE_TEST is not set
# CONFIG_CSD_LOCK_WAIT_DEBUG is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)

# CONFIG_DEBUG_IRQFLAGS is not set
CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set

#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
# CONFIG_DEBUG_PLIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
CONFIG_BUG_ON_DATA_CORRUPTION=y
# CONFIG_DEBUG_MAPLE_TREE is not set
# end of Debug kernel data structures

# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
# CONFIG_RCU_SCALE_TEST is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_RCU_REF_SCALE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
CONFIG_RCU_EXP_CPU_STALL_TIMEOUT=0
# CONFIG_RCU_TRACE is not set
# CONFIG_RCU_EQS_DEBUG is not set
# end of RCU Debugging

# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
CONFIG_LATENCYTOP=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_RETHOOK=y
CONFIG_RETHOOK=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_NO_PATCHABLE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_OBJTOOL_MCOUNT=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_HAVE_BUILDTIME_MCOUNT_SORT=y
CONFIG_BUILDTIME_MCOUNT_SORT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_BOOTTIME_TRACING is not set
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_DYNAMIC_FTRACE_WITH_ARGS=y
# CONFIG_FPROBE is not set
CONFIG_FUNCTION_PROFILER=y
CONFIG_STACK_TRACER=y
# CONFIG_IRQSOFF_TRACER is not set
CONFIG_SCHED_TRACER=y
CONFIG_HWLAT_TRACER=y
# CONFIG_OSNOISE_TRACER is not set
# CONFIG_TIMERLAT_TRACER is not set
# CONFIG_MMIOTRACE is not set
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
# CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_BLK_DEV_IO_TRACE is not set
CONFIG_KPROBE_EVENTS=y
# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set
CONFIG_UPROBE_EVENTS=y
CONFIG_BPF_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
CONFIG_BPF_KPROBE_OVERRIDE=y
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_FTRACE_MCOUNT_USE_CC=y
CONFIG_TRACING_MAP=y
CONFIG_SYNTH_EVENTS=y
CONFIG_HIST_TRIGGERS=y
# CONFIG_TRACE_EVENT_INJECT is not set
# CONFIG_TRACEPOINT_BENCHMARK is not set
CONFIG_RING_BUFFER_BENCHMARK=m
# CONFIG_TRACE_EVAL_MAP_FILE is not set
# CONFIG_FTRACE_RECORD_RECURSION is not set
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_FTRACE_SORT_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS is not set
# CONFIG_PREEMPTIRQ_DELAY_TEST is not set
# CONFIG_SYNTH_EVENT_GEN_TEST is not set
# CONFIG_KPROBE_EVENT_GEN_TEST is not set
# CONFIG_HIST_TRIGGERS_DEBUG is not set
# CONFIG_RV is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_SAMPLES is not set
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT=y
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT_MULTI=y
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
# CONFIG_IO_STRICT_DEVMEM is not set

#
# x86 Debugging
#
CONFIG_EARLY_PRINTK_USB=y
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
CONFIG_EARLY_PRINTK_USB_XDBC=y
# CONFIG_EFI_PGT_DUMP is not set
# CONFIG_DEBUG_TLBFLUSH is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
# CONFIG_DEBUG_ENTRY is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set
# CONFIG_X86_DEBUG_FPU is not set
# CONFIG_PUNIT_ATOM_DEBUG is not set
CONFIG_UNWINDER_ORC=y
# CONFIG_UNWINDER_FRAME_POINTER is not set
# end of x86 Debugging

#
# Kernel Testing and Coverage
#
# CONFIG_KUNIT is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FUNCTION_ERROR_INJECTION=y
# CONFIG_FAULT_INJECTION is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
# CONFIG_KCOV is not set
CONFIG_RUNTIME_TESTING_MENU=y
# CONFIG_LKDTM is not set
# CONFIG_TEST_MIN_HEAP is not set
# CONFIG_TEST_DIV64 is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_TEST_REF_TRACKER is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_REED_SOLOMON_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_HEXDUMP is not set
# CONFIG_STRING_SELFTEST is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_STRSCPY is not set
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_TEST_PRINTF is not set
# CONFIG_TEST_SCANF is not set
# CONFIG_TEST_BITMAP is not set
# CONFIG_TEST_UUID is not set
# CONFIG_TEST_XARRAY is not set
# CONFIG_TEST_MAPLE_TREE is not set
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_SIPHASH is not set
# CONFIG_TEST_IDA is not set
# CONFIG_TEST_LKM is not set
# CONFIG_TEST_BITOPS is not set
# CONFIG_TEST_VMALLOC is not set
# CONFIG_TEST_USER_COPY is not set
# CONFIG_TEST_BPF is not set
# CONFIG_TEST_BLACKHOLE_DEV is not set
# CONFIG_FIND_BIT_BENCHMARK is not set
# CONFIG_TEST_FIRMWARE is not set
# CONFIG_TEST_SYSCTL is not set
# CONFIG_TEST_UDELAY is not set
# CONFIG_TEST_STATIC_KEYS is not set
# CONFIG_TEST_DYNAMIC_DEBUG is not set
# CONFIG_TEST_KMOD is not set
# CONFIG_TEST_MEMCAT_P is not set
# CONFIG_TEST_LIVEPATCH is not set
# CONFIG_TEST_MEMINIT is not set
# CONFIG_TEST_HMM is not set
# CONFIG_TEST_FREE_PAGES is not set
# CONFIG_TEST_FPU is not set
# CONFIG_TEST_CLOCKSOURCE_WATCHDOG is not set
CONFIG_ARCH_USE_MEMTEST=y
# CONFIG_MEMTEST is not set
# end of Kernel Testing and Coverage

#
# Rust hacking
#
# end of Rust hacking
# end of Kernel hacking

[-- Attachment #3: job-script --]
[-- Type: text/plain, Size: 6166 bytes --]

#!/bin/sh

export_top_env()
{
	export suite='kvm-unit-tests-qemu'
	export testcase='kvm-unit-tests-qemu'
	export category='functional'
	export timeout='35m'
	export qemu_branch='qemu/master'
	export qemu_commit='222059a0fccf4af3be776fe35a5ea2d6a68f9a0b'
	export qemu_config='x86_64-softmmu'
	export job_origin='kvm-unit-tests-qemu.yaml'
	export queue_cmdline_keys='branch
commit
kbuild_queue_analysis'
	export queue='validate'
	export testbox='lkp-icl-2sp4'
	export tbox_group='lkp-icl-2sp4'
	export submit_id='63c4488115ede5e9d65c966d'
	export job_file='/lkp/jobs/scheduled/lkp-icl-2sp4/kvm-unit-tests-qemu-defaults-debian-11.1-x86_64-20220510.cgz-99e2853d906a7593e6a3f0e5bc7ecc503b6b9462-20230116-59862-vf7f1y-2.yaml'
	export id='65305468bc5b516b767c01e69afdc6a2acce4421'
	export queuer_version='/zday/lkp'
	export model='Ice Lake'
	export nr_node=2
	export nr_cpu=128
	export memory='128G'
	export nr_ssd_partitions=3
	export nr_hdd_partitions=6
	export hdd_partitions='/dev/disk/by-id/ata-WDC_WD20SPZX-08UA7_WD-WXE2EA0ECVAS-part*'
	export ssd_partitions='/dev/disk/by-id/ata-INTEL_SSDSC2BA800G3_BTTV34510181800JGN-part*'
	export rootfs_partition='/dev/disk/by-id/ata-INTEL_SSDSC2BB240G4_CVWL422602EB240NGN-part1'
	export kernel_cmdline_hw='acpi_rsdp=0x69ffd014'
	export brand='Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz'
	export need_kconfig=\{\"KVM\"\=\>\"m\"\}'
'\{\"KVM_INTEL\"\=\>\"m\"\}'
'\{\"X86_INTEL_TSX_MODE_OFF\"\=\>\"n\"\}'
'\{\"X86_INTEL_TSX_MODE_AUTO\"\=\>\"y\"\}'
'\{\"X86_INTEL_TSX_MODE_ON\"\=\>\"n\"\}
	export commit='99e2853d906a7593e6a3f0e5bc7ecc503b6b9462'
	export ucode='0xd000363'
	export bisect_dmesg=true
	export kconfig='x86_64-rhel-8.3-kvm'
	export enqueue_time='2023-01-16 02:40:01 +0800'
	export _id='63c4488115ede5e9d65c966d'
	export _rt='/result/kvm-unit-tests-qemu/defaults/lkp-icl-2sp4/debian-11.1-x86_64-20220510.cgz/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/x86_64-softmmu/222059a0fccf4af3be776fe35a5ea2d6a68f9a0b'
	export user='lkp'
	export compiler='gcc-11'
	export LKP_SERVER='internal-lkp-server'
	export head_commit='21041184c4351d783bba9e9d3716ed6317b8e808'
	export base_commit='88603b6dc419445847923fcb7fe5080067a30f98'
	export branch='linux-review/Vipin-Sharma/NUMA-aware-page-table-s-pages-allocation/20221222-104911'
	export rootfs='debian-11.1-x86_64-20220510.cgz'
	export result_root='/result/kvm-unit-tests-qemu/defaults/lkp-icl-2sp4/debian-11.1-x86_64-20220510.cgz/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/x86_64-softmmu/222059a0fccf4af3be776fe35a5ea2d6a68f9a0b/2'
	export scheduler_version='/lkp/lkp/src'
	export arch='x86_64'
	export max_uptime=2100
	export initrd='/osimage/debian/debian-11.1-x86_64-20220510.cgz'
	export bootloader_append='root=/dev/ram0
RESULT_ROOT=/result/kvm-unit-tests-qemu/defaults/lkp-icl-2sp4/debian-11.1-x86_64-20220510.cgz/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/x86_64-softmmu/222059a0fccf4af3be776fe35a5ea2d6a68f9a0b/2
BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/vmlinuz-6.1.0-rc8-00451-g99e2853d906a
branch=linux-review/Vipin-Sharma/NUMA-aware-page-table-s-pages-allocation/20221222-104911
job=/lkp/jobs/scheduled/lkp-icl-2sp4/kvm-unit-tests-qemu-defaults-debian-11.1-x86_64-20220510.cgz-99e2853d906a7593e6a3f0e5bc7ecc503b6b9462-20230116-59862-vf7f1y-2.yaml
user=lkp
ARCH=x86_64
kconfig=x86_64-rhel-8.3-kvm
commit=99e2853d906a7593e6a3f0e5bc7ecc503b6b9462
initcall_debug
nmi_watchdog=0
acpi_rsdp=0x69ffd014
max_uptime=2100
LKP_SERVER=internal-lkp-server
nokaslr
selinux=0
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
printk.devkmsg=on
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
drbd.minor_count=8
systemd.log_level=err
ignore_loglevel
console=tty0
earlyprintk=ttyS0,115200
console=ttyS0,115200
vga=normal
rw'
	export modules_initrd='/pkg/linux/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/modules.cgz'
	export bm_initrd='/osimage/deps/debian-11.1-x86_64-20220510.cgz/run-ipconfig_20220515.cgz,/osimage/deps/debian-11.1-x86_64-20220510.cgz/lkp_20220513.cgz,/osimage/deps/debian-11.1-x86_64-20220510.cgz/rsync-rootfs_20220515.cgz,/osimage/deps/debian-11.1-x86_64-20220510.cgz/kvm-unit-tests-qemu_20220726.cgz,/osimage/pkg/debian-11.1-x86_64-20220510.cgz/kvm-unit-tests-x86_64-e11a0e2-1_20230106.cgz,/osimage/deps/debian-11.1-x86_64-20220510.cgz/hw_20220526.cgz'
	export ucode_initrd='/osimage/ucode/intel-ucode-20220804.cgz'
	export lkp_initrd='/osimage/user/lkp/lkp-x86_64.cgz'
	export site='inn'
	export LKP_CGI_PORT=80
	export LKP_CIFS_PORT=139
	export last_kernel='6.1.0-rc8-00459-gf50f1392490f'
	export repeat_to=6
	export stop_repeat_if_found='dmesg.BUG:sleeping_function_called_from_invalid_context_at_include/linux/sched/mm.h'
	export kbuild_queue_analysis=1
	export schedule_notify_address=
	export kernel='/pkg/linux/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/vmlinuz-6.1.0-rc8-00451-g99e2853d906a'
	export dequeue_time='2023-01-16 02:55:41 +0800'
	export job_initrd='/lkp/jobs/scheduled/lkp-icl-2sp4/kvm-unit-tests-qemu-defaults-debian-11.1-x86_64-20220510.cgz-99e2853d906a7593e6a3f0e5bc7ecc503b6b9462-20230116-59862-vf7f1y-2.cgz'

	[ -n "$LKP_SRC" ] ||
	export LKP_SRC=/lkp/${user:-lkp}/src
}

run_job()
{
	echo $$ > $TMP/run-job.pid

	. $LKP_SRC/lib/http.sh
	. $LKP_SRC/lib/job.sh
	. $LKP_SRC/lib/env.sh

	export_top_env

	run_monitor $LKP_SRC/monitors/wrapper kmsg
	run_monitor $LKP_SRC/monitors/wrapper heartbeat
	run_monitor $LKP_SRC/monitors/wrapper meminfo
	run_monitor $LKP_SRC/monitors/wrapper oom-killer
	run_monitor $LKP_SRC/monitors/plain/watchdog

	run_test $LKP_SRC/tests/wrapper kvm-unit-tests-qemu
}

extract_stats()
{
	export stats_part_begin=
	export stats_part_end=

	$LKP_SRC/stats/wrapper kvm-unit-tests-qemu
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper meminfo

	$LKP_SRC/stats/wrapper time kvm-unit-tests-qemu.time
	$LKP_SRC/stats/wrapper dmesg
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper last_state
	$LKP_SRC/stats/wrapper stderr
	$LKP_SRC/stats/wrapper time
}

"$@"

[-- Attachment #4: dmesg.xz --]
[-- Type: application/x-xz, Size: 122212 bytes --]

[-- Attachment #5: kvm-unit-tests-qemu --]
[-- Type: text/plain, Size: 231983 bytes --]

timeout 60m git clone -q git://gitmirror/qemu /lkp/benchmarks/qemu
2023-01-15 18:57:55 git checkout -q 222059a0fccf4af3be776fe35a5ea2d6a68f9a0b
2023-01-15 18:58:13 ./configure --target-list=x86_64-softmmu
Using './build' as the directory for build output
The Meson build system
Version: 0.61.5
Source dir: /lkp/benchmarks/qemu
Build dir: /lkp/benchmarks/qemu/build
Build type: native build
Project name: qemu
Project version: 7.2.50
C compiler for the host machine: cc -m64 -mcx16 (gcc 10.2.1 "cc (Debian 10.2.1-6) 10.2.1 20210110")
C linker for the host machine: cc -m64 -mcx16 ld.bfd 2.35.2
Host machine cpu family: x86_64
Host machine cpu: x86_64
Program scripts/symlink-install-tree.py found: YES (/usr/bin/python3 /lkp/benchmarks/qemu/scripts/symlink-install-tree.py)
Program sh found: YES (/usr/bin/sh)
Program python3 found: YES (/usr/bin/python3)
Program bzip2 found: YES (/bin/bzip2)
Program iasl found: NO
Compiler for C supports link arguments -Wl,-z,relro: YES 
Compiler for C supports link arguments -Wl,-z,now: YES 
C++ compiler for the host machine: c++ -m64 -mcx16 (gcc 10.2.1 "c++ (Debian 10.2.1-6) 10.2.1 20210110")
C++ linker for the host machine: c++ -m64 -mcx16 ld.bfd 2.35.2
Compiler for C++ supports link arguments -Wl,--warn-common: YES 
Program cgcc found: NO
Library m found: YES
Run-time dependency threads found: YES
Library util found: YES
Run-time dependency appleframeworks found: NO (tried framework)
Found pkg-config: /usr/bin/pkg-config (0.29.2)
Run-time dependency gio-2.0 found: YES 2.66.8
Program /usr/bin/gdbus-codegen found: YES (/usr/bin/gdbus-codegen)
Run-time dependency gio-unix-2.0 found: YES 2.66.8
Run-time dependency pixman-1 found: YES 0.40.0
Run-time dependency zlib found: YES 1.2.11
Has header "libaio.h" : NO 
Run-time dependency liburing found: NO (tried pkgconfig)
Run-time dependency libnfs found: NO (tried pkgconfig)
Run-time dependency appleframeworks found: NO (tried framework)
Run-time dependency appleframeworks found: NO (tried framework)
Run-time dependency libseccomp found: NO (tried pkgconfig)
Has header "cap-ng.h" : NO 
Run-time dependency xkbcommon found: NO (tried pkgconfig)
Run-time dependency slirp found: NO (tried pkgconfig)
Has header "libvdeplug.h" : NO 
Run-time dependency libpulse found: NO (tried pkgconfig)
Run-time dependency alsa found: NO (tried pkgconfig)
Run-time dependency jack found: NO (tried pkgconfig)
Run-time dependency sndio found: NO (tried pkgconfig)
Run-time dependency spice-protocol found: NO (tried pkgconfig)
Run-time dependency spice-server found: NO (tried pkgconfig)
Library rt found: YES
Run-time dependency libiscsi found: NO (tried pkgconfig)
Run-time dependency libzstd found: NO (tried pkgconfig)
Run-time dependency virglrenderer found: NO (tried pkgconfig)
Run-time dependency blkio found: NO (tried pkgconfig)
Run-time dependency libcurl found: NO (tried pkgconfig)
Run-time dependency libudev found: NO (tried pkgconfig)
Library mpathpersist found: NO
Run-time dependency ncursesw found: NO (tried pkgconfig)
Has header "curses.h" : NO 
Message: Trying with /usr/include/ncursesw
Has header "curses.h" : NO 
Has header "brlapi.h" : NO 
sdl2-config found: NO
Run-time dependency sdl2 found: NO (tried pkgconfig and config-tool)
Library rados found: NO
Has header "rbd/librbd.h" : NO 
Run-time dependency glusterfs-api found: NO (tried pkgconfig)
Run-time dependency libssh found: NO (tried pkgconfig)
Has header "bzlib.h" : NO 
Has header "lzfse.h" : NO 
Has header "sys/soundcard.h" : YES 
Run-time dependency epoxy found: NO (tried pkgconfig)
Has header "epoxy/egl.h" with dependency epoxy: NO 
Run-time dependency gnutls found: NO (tried pkgconfig)
Run-time dependency gnutls found: NO (tried pkgconfig)
libgcrypt-config found: NO need ['>=1.8']
Run-time dependency libgcrypt found: NO (tried config-tool)
Run-time dependency nettle found: NO (tried pkgconfig)
Run-time dependency gmp found: NO (tried pkgconfig)
Run-time dependency gtk+-3.0 found: NO (tried pkgconfig)
Run-time dependency libpng found: NO (tried pkgconfig)
Run-time dependency libjpeg found: NO (tried pkgconfig)
Has header "sasl/sasl.h" : NO 
Has header "security/pam_appl.h" : NO 
Has header "snappy-c.h" : NO 
Has header "lzo/lzo1x.h" : NO 
Has header "numa.h" : NO 
Library ibumad found: NO
Has header "rdma/rdma_cma.h" : NO 
Library ibverbs found: NO
Run-time dependency xencontrol found: NO (tried pkgconfig)
Library xenstore found: NO
Library xenctrl found: NO
Library xendevicemodel found: NO
Library xenforeignmemory found: NO
Library xengnttab found: NO
Library xenevtchn found: NO
Library xentoolcore found: NO
Run-time dependency libcacard found: NO (tried pkgconfig)
Run-time dependency u2f-emu found: NO (tried pkgconfig)
Run-time dependency canokey-qemu found: NO (tried pkgconfig)
Run-time dependency libusbredirparser-0.5 found: NO (tried pkgconfig)
Run-time dependency libusb-1.0 found: NO (tried pkgconfig)
Run-time dependency libpmem found: NO (tried pkgconfig)
Run-time dependency libdaxctl found: NO (tried pkgconfig)
Run-time dependency libkeyutils found: NO (tried pkgconfig)
Checking for function "gettid" : YES 
Run-time dependency libselinux found: YES 3.1
Run-time dependency fuse3 found: NO (tried pkgconfig)
Run-time dependency libbpf found: NO (tried pkgconfig)
Has header "sys/epoll.h" : YES 
Has header "linux/magic.h" : YES 
Has header "valgrind/valgrind.h" : NO 
Has header "linux/btrfs.h" : YES 
Has header "libdrm/drm.h" : NO 
Has header "pty.h" : YES 
Has header "sys/disk.h" : NO 
Has header "sys/ioccom.h" : NO 
Has header "sys/kcov.h" : NO 
Checking for function "close_range" : NO 
Checking for function "accept4" : YES 
Checking for function "clock_adjtime" : YES 
Checking for function "dup3" : YES 
Checking for function "fallocate" : YES 
Checking for function "posix_fallocate" : YES 
Checking for function "posix_memalign" : YES 
Checking for function "_aligned_malloc" : NO 
Checking for function "valloc" : YES 
Checking for function "memalign" : YES 
Checking for function "ppoll" : YES 
Checking for function "preadv" : YES 
Checking for function "pthread_fchdir_np" : NO 
Checking for function "sendfile" : YES 
Checking for function "setns" : YES 
Checking for function "unshare" : YES 
Checking for function "syncfs" : YES 
Checking for function "sync_file_range" : YES 
Checking for function "timerfd_create" : YES 
Checking for function "copy_file_range" : YES 
Checking for function "getifaddrs" : YES 
Checking for function "openpty" with dependency -lutil: YES 
Checking for function "strchrnul" : YES 
Checking for function "system" : YES 
Header <byteswap.h> has symbol "bswap_32" : YES 
Header <sys/epoll.h> has symbol "epoll_create1" : YES 
Header <linux/falloc.h> has symbol "FALLOC_FL_PUNCH_HOLE" : YES 
Header <linux/falloc.h> has symbol "FALLOC_FL_KEEP_SIZE" : YES 
Header <linux/falloc.h> has symbol "FALLOC_FL_ZERO_RANGE" : YES 
Has header "linux/fiemap.h" : YES 
Header <linux/fs.h> has symbol "FS_IOC_FIEMAP" : YES 
Checking for function "getrandom" : YES 
Header <sys/random.h> has symbol "GRND_NONBLOCK" : YES 
Header <sys/inotify.h> has symbol "inotify_init" : YES 
Header <sys/inotify.h> has symbol "inotify_init1" : YES 
Header <machine/bswap.h> has symbol "bswap32" : NO 
Header <sys/prctl.h> has symbol "PR_SET_TIMERSLACK" : YES 
Header <linux/rtnetlink.h> has symbol "IFLA_PROTO_DOWN" : YES 
Header <sys/sysmacros.h> has symbol "makedev" : YES 
Header <getopt.h> has symbol "optreset" : NO 
Header <netinet/in.h> has symbol "IPPROTO_MPTCP" : NO 
Header <sys/mount.h> has symbol "FSCONFIG_SET_FLAG" : NO 
Checking whether type "struct sigevent" has member "sigev_notify_thread_id" : NO 
Checking whether type "struct stat" has member "st_atim" : YES 
Checking for type "struct iovec" : YES 
Checking for type "struct utmpx" : YES 
Checking for type "struct mmsghdr" : YES 
Header <linux/vm_sockets.h> has symbol "AF_VSOCK" : YES 
Program scripts/minikconf.py found: YES (/usr/bin/python3 /lkp/benchmarks/qemu/scripts/minikconf.py)
Configuring x86_64-softmmu-config-target.h using configuration
Configuring x86_64-softmmu-config-devices.mak with command
Reading depfile: /lkp/benchmarks/qemu/build/meson-private/x86_64-softmmu-config-devices.mak.d
Configuring x86_64-softmmu-config-devices.h using configuration
Program scripts/make-config-poison.sh found: YES (/lkp/benchmarks/qemu/scripts/make-config-poison.sh)
Run-time dependency capstone found: NO (tried pkgconfig)
Library fdt found: YES
Configuring config-host.h using configuration
Program scripts/hxtool found: YES (/lkp/benchmarks/qemu/scripts/hxtool)
Program scripts/shaderinclude.pl found: YES (/usr/bin/env perl /lkp/benchmarks/qemu/scripts/shaderinclude.pl)
Program scripts/qapi-gen.py found: YES (/usr/bin/python3 /lkp/benchmarks/qemu/scripts/qapi-gen.py)
Program scripts/qemu-version.sh found: YES (/lkp/benchmarks/qemu/scripts/qemu-version.sh)

Executing subproject libvhost-user 

libvhost-user| Project name: libvhost-user
libvhost-user| Project version: undefined
libvhost-user| C compiler for the host machine: cc -m64 -mcx16 (gcc 10.2.1 "cc (Debian 10.2.1-6) 10.2.1 20210110")
libvhost-user| C linker for the host machine: cc -m64 -mcx16 ld.bfd 2.35.2
libvhost-user| Dependency threads found: YES unknown (cached)
libvhost-user| Dependency glib-2.0 found: YES 2.66.8 (overridden)
libvhost-user| Build targets in project: 9
libvhost-user| Subproject libvhost-user finished.


Executing subproject libvduse 

libvduse| Project name: libvduse
libvduse| Project version: undefined
libvduse| C compiler for the host machine: cc -m64 -mcx16 (gcc 10.2.1 "cc (Debian 10.2.1-6) 10.2.1 20210110")
libvduse| C linker for the host machine: cc -m64 -mcx16 ld.bfd 2.35.2
libvduse| Build targets in project: 10
libvduse| Subproject libvduse finished.

Program scripts/decodetree.py found: YES (/usr/bin/python3 /lkp/benchmarks/qemu/scripts/decodetree.py)
Program ../scripts/modules/module_block.py found: YES (/usr/bin/python3 /lkp/benchmarks/qemu/block/../scripts/modules/module_block.py)
Program ../scripts/block-coroutine-wrapper.py found: YES (/usr/bin/python3 /lkp/benchmarks/qemu/block/../scripts/block-coroutine-wrapper.py)
Program scripts/modinfo-collect.py found: YES (/lkp/benchmarks/qemu/scripts/modinfo-collect.py)
Program scripts/modinfo-generate.py found: YES (/lkp/benchmarks/qemu/scripts/modinfo-generate.py)
Program nm found: YES
Program scripts/undefsym.py found: YES (/usr/bin/python3 /lkp/benchmarks/qemu/scripts/undefsym.py)
Program scripts/feature_to_c.sh found: YES (/bin/sh /lkp/benchmarks/qemu/scripts/feature_to_c.sh)
Configuring 50-edk2-i386-secure.json using configuration
Configuring 50-edk2-x86_64-secure.json using configuration
Configuring 60-edk2-aarch64.json using configuration
Configuring 60-edk2-arm.json using configuration
Configuring 60-edk2-i386.json using configuration
Configuring 60-edk2-x86_64.json using configuration
Program qemu-keymap found: NO
Program sphinx-build-3 sphinx-build found: NO
Program bash found: YES 5.1.4 (/usr/bin/bash)
Program diff found: YES (/usr/bin/diff)
Program dbus-daemon found: YES (/usr/bin/dbus-daemon)
Did not find CMake 'cmake'
Found CMake: NO
Run-time dependency gvnc-1.0 found: NO (tried pkgconfig and cmake)
Program initrd-stress.sh found: YES (/lkp/benchmarks/qemu/tests/migration/initrd-stress.sh)
Build targets in project: 515

qemu 7.2.50

  Directories
    Install prefix               : /usr/local
    BIOS directory               : share/qemu
    firmware path                : share/qemu-firmware
    binary directory             : /usr/local/bin
    library directory            : /usr/local/lib/x86_64-linux-gnu
    module directory             : lib/x86_64-linux-gnu/qemu
    libexec directory            : /usr/local/libexec
    include directory            : /usr/local/include
    config directory             : /usr/local/etc
    local state directory        : /var/local
    Manual directory             : /usr/local/share/man
    Doc directory                : /usr/local/share/doc
    Build directory              : /lkp/benchmarks/qemu/build
    Source path                  : /lkp/benchmarks/qemu
    GIT submodules               : ui/keycodemapdb meson tests/fp/berkeley-testfloat-3 tests/fp/berkeley-softfloat-3 dtc

  Host binaries
    git                          : git
    make                         : make
    python                       : /usr/bin/python3 (version: 3.9)
    sphinx-build                 : NO
    iasl                         : NO
    genisoimage                  : 

  Configurable features
    Documentation                : NO
    system-mode emulation        : YES
    user-mode emulation          : NO
    block layer                  : YES
    Install blobs                : YES
    module support               : NO
    fuzzing support              : NO
    Audio drivers                : oss
    Trace backends               : log
    D-Bus display                : NO
    QOM debugging                : NO
    vhost-kernel support         : YES
    vhost-net support            : YES
    vhost-user support           : YES
    vhost-user-crypto support    : YES
    vhost-user-blk server support: YES
    vhost-vdpa support           : YES
    build guest agent            : YES

  Compilation
    host CPU                     : x86_64
    host endianness              : little
    C compiler                   : cc -m64 -mcx16
    Host C compiler              : cc -m64 -mcx16
    C++ compiler                 : c++ -m64 -mcx16
    CFLAGS                       : -O2 -g
    CXXFLAGS                     : -O2 -g
    QEMU_CFLAGS                  : -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wold-style-declaration -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined -Wimplicit-fallthrough=2 -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-psabi -fstack-protector-strong
    QEMU_CXXFLAGS                : -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wundef -Wwrite-strings -fno-strict-aliasing -fno-common -fwrapv -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wendif-labels -Wexpansion-to-defined -Wimplicit-fallthrough=2 -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-psabi -fstack-protector-strong
    QEMU_OBJCFLAGS               : 
    QEMU_LDFLAGS                 : -fstack-protector-strong -Wl,-z,relro -Wl,-z,now -Wl,--warn-common
    profiler                     : NO
    link-time optimization (LTO) : NO
    PIE                          : YES
    static build                 : NO
    malloc trim support          : YES
    membarrier                   : NO
    debug stack usage            : NO
    mutex debugging              : NO
    memory allocator             : system
    avx2 optimization            : YES
    avx512f optimization         : NO
    gprof enabled                : NO
    gcov                         : NO
    thread sanitizer             : NO
    CFI support                  : NO
    strip binaries               : NO
    sparse                       : NO
    mingw32 support              : NO

  Cross compilers
    x86_64                       : cc

  Targets and accelerators
    KVM support                  : YES
    HAX support                  : NO
    HVF support                  : NO
    WHPX support                 : NO
    NVMM support                 : NO
    Xen support                  : NO
    TCG support                  : YES
    TCG backend                  : native (x86_64)
    TCG plugins                  : YES
    TCG debug enabled            : NO
    target list                  : x86_64-softmmu
    default devices              : YES
    out of process emulation     : YES
    vfio-user server             : NO

  Block layer support
    coroutine backend            : ucontext
    coroutine pool               : YES
    Block whitelist (rw)         : 
    Block whitelist (ro)         : 
    Use block whitelist in tools : NO
    VirtFS support               : NO
    build virtiofs daemon        : NO
    Live block migration         : YES
    replication support          : YES
    bochs support                : YES
    cloop support                : YES
    dmg support                  : YES
    qcow v1 support              : YES
    vdi support                  : YES
    vvfat support                : YES
    qed support                  : YES
    parallels support            : YES
    FUSE exports                 : NO
    VDUSE block exports          : YES

  Crypto
    TLS priority                 : NORMAL
    GNUTLS support               : NO
    libgcrypt                    : NO
    nettle                       : NO
    AF_ALG support               : NO
    rng-none                     : NO
    Linux keyring                : YES

  Dependencies
    SDL support                  : NO
    SDL image support            : NO
    GTK support                  : NO
    pixman                       : YES 0.40.0
    VTE support                  : NO
    slirp support                : NO
    libtasn1                     : NO
    PAM                          : NO
    iconv support                : YES
    curses support               : NO
    virgl support                : NO
    blkio support                : NO
    curl support                 : NO
    Multipath support            : NO
    PNG support                  : NO
    VNC support                  : YES
    VNC SASL support             : NO
    VNC JPEG support             : NO
    OSS support                  : YES
    sndio support                : NO
    ALSA support                 : NO
    PulseAudio support           : NO
    JACK support                 : NO
    brlapi support               : NO
    vde support                  : NO
    netmap support               : NO
    l2tpv3 support               : YES
    Linux AIO support            : NO
    Linux io_uring support       : NO
    ATTR/XATTR support           : YES
    RDMA support                 : NO
    PVRDMA support               : NO
    fdt support                  : system
    libcap-ng support            : NO
    bpf support                  : NO
    spice protocol support       : NO
    rbd support                  : NO
    smartcard support            : NO
    U2F support                  : NO
    libusb                       : NO
    usb net redir                : NO
    OpenGL support (epoxy)       : NO
    GBM                          : NO
    libiscsi support             : NO
    libnfs support               : NO
    seccomp support              : NO
    GlusterFS support            : NO
    TPM support                  : YES
    libssh support               : NO
    lzo support                  : NO
    snappy support               : NO
    bzip2 support                : NO
    lzfse support                : NO
    zstd support                 : NO
    NUMA host support            : NO
    capstone                     : NO
    libpmem support              : NO
    libdaxctl support            : NO
    libudev                      : NO
    FUSE lseek                   : NO
    selinux                      : YES 3.1

  Subprojects
    libvduse                     : YES
    libvhost-user                : YES

  User defined options
    Native files                 : config-meson.cross
    prefix                       : /usr/local
    werror                       : true
    vfio_user_server             : disabled

Found ninja-1.10.1 at /usr/bin/ninja
Running postconf script '/usr/bin/python3 /lkp/benchmarks/qemu/scripts/symlink-install-tree.py'
2023-01-15 18:58:29 make -j 128
changing dir to build for make ""...
make[1]: Entering directory '/lkp/benchmarks/qemu/build'
  GIT     ui/keycodemapdb meson tests/fp/berkeley-testfloat-3 tests/fp/berkeley-softfloat-3 dtc
/usr/bin/ninja  build.ninja && touch build.ninja.stamp
ninja: no work to do.
/usr/bin/python3 -B /lkp/benchmarks/qemu/meson/meson.py introspect --targets --tests --benchmarks | /usr/bin/python3 -B scripts/mtest2make.py > Makefile.mtest
  GIT     ui/keycodemapdb meson tests/fp/berkeley-testfloat-3 tests/fp/berkeley-softfloat-3 dtc
[1/2552] Compiling C object subprojects/libvhost-user/link-test.p/link-test.c.o
[2/2552] Generating trace/trace-nbd.h with a custom command
[3/2552] Generating trace/trace-nbd.c with a custom command
[4/2552] Generating trace/trace-scsi.h with a custom command
[5/2552] Generating trace/trace-accel_kvm.c with a custom command
[6/2552] Generating trace/trace-audio.h with a custom command
[7/2552] Generating trace/trace-audio.c with a custom command
[8/2552] Generating trace/trace-backends.h with a custom command
[9/2552] Generating trace/trace-backends.c with a custom command
[10/2552] Generating trace/trace-backends_tpm.h with a custom command
[11/2552] Generating trace/trace-backends_tpm.c with a custom command
[12/2552] Generating trace/trace-chardev.h with a custom command
[13/2552] Generating trace/trace-block.c with a custom command
[14/2552] Generating trace/trace-io.h with a custom command
[15/2552] Generating trace/trace-scsi.c with a custom command
[16/2552] Generating trace/trace-accel_kvm.h with a custom command
[17/2552] Generating trace/trace-root.c with a custom command
[18/2552] Generating trace/trace-crypto.h with a custom command
[19/2552] Generating trace/trace-crypto.c with a custom command
[20/2552] Generating trace/trace-qapi.h with a custom command
[21/2552] Generating trace/trace-io.c with a custom command
[22/2552] Generating trace/trace-qapi.c with a custom command
[23/2552] Generating trace/trace-qom.h with a custom command
[24/2552] Generating trace/trace-qom.c with a custom command
[25/2552] Generating trace/trace-monitor.h with a custom command
[26/2552] Generating trace/trace-root.h with a custom command
[27/2552] Generating trace/trace-hw_i2c.h with a custom command
[28/2552] Generating trace/trace-hw_i2c.c with a custom command
[29/2552] Generating trace/trace-hw_i386.h with a custom command
[30/2552] Generating trace/trace-chardev.c with a custom command
[31/2552] Generating trace/trace-monitor.c with a custom command
[32/2552] Generating trace/trace-util.h with a custom command
[33/2552] Generating trace/trace-util.c with a custom command
[34/2552] Generating trace/trace-gdbstub.h with a custom command
[35/2552] Generating trace/trace-gdbstub.c with a custom command
[36/2552] Generating trace/trace-authz.h with a custom command
[37/2552] Generating trace/trace-authz.c with a custom command
[38/2552] Generating trace/trace-block.h with a custom command
[39/2552] Generating trace/trace-hw_i386_xen.c with a custom command
[40/2552] Generating trace/trace-ebpf.h with a custom command
[41/2552] Generating trace/trace-ebpf.c with a custom command
[42/2552] Generating trace/trace-hw_9pfs.h with a custom command
[43/2552] Generating trace/trace-hw_9pfs.c with a custom command
[44/2552] Generating trace/trace-hw_acpi.h with a custom command
[45/2552] Generating trace/trace-hw_acpi.c with a custom command
[46/2552] Generating trace/trace-hw_adc.h with a custom command
[47/2552] Generating trace/trace-hw_adc.c with a custom command
[48/2552] Generating trace/trace-hw_alpha.h with a custom command
[49/2552] Generating trace/trace-hw_alpha.c with a custom command
[50/2552] Generating trace/trace-hw_arm.h with a custom command
[51/2552] Generating trace/trace-hw_arm.c with a custom command
[52/2552] Generating trace/trace-hw_audio.h with a custom command
[53/2552] Generating trace/trace-hw_audio.c with a custom command
[54/2552] Generating trace/trace-hw_block.h with a custom command
[55/2552] Generating trace/trace-hw_block.c with a custom command
[56/2552] Generating trace/trace-hw_block_dataplane.h with a custom command
[57/2552] Generating trace/trace-hw_block_dataplane.c with a custom command
[58/2552] Generating trace/trace-hw_char.h with a custom command
[59/2552] Generating trace/trace-hw_char.c with a custom command
[60/2552] Generating trace/trace-hw_display.c with a custom command
[61/2552] Generating trace/trace-hw_dma.h with a custom command
[62/2552] Generating trace/trace-hw_dma.c with a custom command
[63/2552] Generating trace/trace-hw_hyperv.h with a custom command
[64/2552] Generating trace/trace-hw_hyperv.c with a custom command
[65/2552] Generating trace/trace-hw_i386.c with a custom command
[66/2552] Generating trace/trace-hw_i386_xen.h with a custom command
[67/2552] Generating trace/trace-hw_watchdog.c with a custom command
[68/2552] Compiling C object subprojects/libvhost-user/libvhost-user-glib.a.p/libvhost-user-glib.c.o
[69/2552] Generating trace/trace-hw_display.h with a custom command
[70/2552] Generating trace/trace-hw_ide.h with a custom command
[71/2552] Generating trace/trace-hw_ide.c with a custom command
[72/2552] Generating trace/trace-hw_input.h with a custom command
[73/2552] Generating trace/trace-hw_input.c with a custom command
[74/2552] Generating trace/trace-hw_intc.h with a custom command
[75/2552] Generating trace/trace-hw_intc.c with a custom command
[76/2552] Generating trace/trace-hw_isa.h with a custom command
[77/2552] Generating trace/trace-hw_isa.c with a custom command
[78/2552] Generating trace/trace-hw_mem.h with a custom command
[79/2552] Generating trace/trace-hw_mem.c with a custom command
[80/2552] Generating trace/trace-hw_mips.h with a custom command
[81/2552] Generating trace/trace-hw_mips.c with a custom command
[82/2552] Generating trace/trace-hw_misc.h with a custom command
[83/2552] Generating trace/trace-hw_misc.c with a custom command
[84/2552] Generating trace/trace-hw_misc_macio.h with a custom command
[85/2552] Generating trace/trace-hw_misc_macio.c with a custom command
[86/2552] Generating trace/trace-hw_net.h with a custom command
[87/2552] Generating trace/trace-hw_net.c with a custom command
[88/2552] Generating trace/trace-hw_net_can.h with a custom command
[89/2552] Generating trace/trace-hw_net_can.c with a custom command
[90/2552] Generating trace/trace-hw_nubus.h with a custom command
[91/2552] Generating trace/trace-hw_nubus.c with a custom command
[92/2552] Generating trace/trace-hw_nvme.h with a custom command
[93/2552] Generating trace/trace-hw_nvme.c with a custom command
[94/2552] Generating trace/trace-hw_nvram.h with a custom command
[95/2552] Generating trace/trace-hw_nvram.c with a custom command
[96/2552] Generating trace/trace-hw_pci.h with a custom command
[97/2552] Generating trace/trace-hw_pci.c with a custom command
[98/2552] Generating trace/trace-hw_pci_host.h with a custom command
[99/2552] Generating trace/trace-hw_pci_host.c with a custom command
[100/2552] Generating trace/trace-hw_ppc.h with a custom command
[101/2552] Generating trace/trace-hw_ppc.c with a custom command
[102/2552] Generating trace/trace-hw_rdma.h with a custom command
[103/2552] Generating trace/trace-hw_rdma.c with a custom command
[104/2552] Generating trace/trace-hw_rdma_vmw.h with a custom command
[105/2552] Generating trace/trace-hw_rdma_vmw.c with a custom command
[106/2552] Generating trace/trace-hw_rtc.h with a custom command
[107/2552] Generating trace/trace-hw_rtc.c with a custom command
[108/2552] Generating trace/trace-hw_s390x.h with a custom command
[109/2552] Generating trace/trace-hw_s390x.c with a custom command
[110/2552] Generating trace/trace-hw_scsi.h with a custom command
[111/2552] Generating trace/trace-hw_scsi.c with a custom command
[112/2552] Generating trace/trace-hw_sd.h with a custom command
[113/2552] Generating trace/trace-hw_sd.c with a custom command
[114/2552] Generating trace/trace-hw_sh4.h with a custom command
[115/2552] Generating trace/trace-hw_sh4.c with a custom command
[116/2552] Generating trace/trace-hw_sparc.h with a custom command
[117/2552] Generating trace/trace-hw_sparc.c with a custom command
[118/2552] Generating trace/trace-hw_sparc64.h with a custom command
[119/2552] Generating trace/trace-hw_sparc64.c with a custom command
[120/2552] Generating trace/trace-hw_ssi.h with a custom command
[121/2552] Generating trace/trace-hw_ssi.c with a custom command
[122/2552] Generating trace/trace-hw_timer.h with a custom command
[123/2552] Generating trace/trace-hw_timer.c with a custom command
[124/2552] Generating trace/trace-hw_tpm.c with a custom command
[125/2552] Generating trace/trace-hw_usb.c with a custom command
[126/2552] Generating trace/trace-hw_vfio.h with a custom command
[127/2552] Generating trace/trace-hw_vfio.c with a custom command
[128/2552] Generating trace/trace-hw_virtio.h with a custom command
[129/2552] Generating trace/trace-hw_virtio.c with a custom command
[130/2552] Generating trace/trace-hw_watchdog.h with a custom command
[131/2552] Generating trace/trace-hw_xen.h with a custom command
[132/2552] Generating trace/trace-hw_xen.c with a custom command
[133/2552] Generating trace/trace-hw_gpio.h with a custom command
[134/2552] Generating trace/trace-hw_gpio.c with a custom command
[135/2552] Generating trace/trace-net.h with a custom command
[136/2552] Generating trace/trace-net.c with a custom command
[137/2552] Generating trace/trace-softmmu.h with a custom command
[138/2552] Generating trace/trace-softmmu.c with a custom command
[139/2552] Generating trace/trace-hw_remote.h with a custom command
[140/2552] Generating trace/trace-hw_remote.c with a custom command
[141/2552] Generating trace/trace-accel_tcg.h with a custom command
[142/2552] Generating trace/trace-accel_tcg.c with a custom command
[143/2552] Generating trace/trace-hw_tpm.h with a custom command
[144/2552] Generating trace/trace-hw_usb.h with a custom command
[145/2552] Generating trace/trace-migration.h with a custom command
[146/2552] Generating trace/trace-migration.c with a custom command
[147/2552] Generating trace/trace-ui.h with a custom command
[148/2552] Generating trace/trace-ui.c with a custom command
[149/2552] Generating trace/trace-hw_core.h with a custom command
[150/2552] Generating trace/trace-hw_core.c with a custom command
[151/2552] Generating trace/trace-target_arm.h with a custom command
[152/2552] Generating trace/trace-target_arm.c with a custom command
[153/2552] Generating trace/trace-target_arm_hvf.h with a custom command
[154/2552] Generating trace/trace-target_arm_hvf.c with a custom command
[155/2552] Generating trace/trace-target_hppa.h with a custom command
[156/2552] Generating trace/trace-target_hppa.c with a custom command
[157/2552] Generating trace/trace-target_i386.h with a custom command
[158/2552] Generating trace/trace-target_i386.c with a custom command
[159/2552] Generating trace/trace-target_i386_kvm.h with a custom command
[160/2552] Generating trace/trace-target_i386_kvm.c with a custom command
[161/2552] Generating trace/trace-target_mips_tcg.h with a custom command
[162/2552] Generating trace/trace-target_mips_tcg.c with a custom command
[163/2552] Generating trace/trace-target_nios2.h with a custom command
[164/2552] Generating trace/trace-target_nios2.c with a custom command
[165/2552] Generating trace/trace-target_ppc.h with a custom command
[166/2552] Generating trace/trace-target_ppc.c with a custom command
[167/2552] Generating trace/trace-target_riscv.h with a custom command
[168/2552] Generating trace/trace-target_riscv.c with a custom command
[169/2552] Generating trace/trace-target_s390x.h with a custom command
[170/2552] Generating trace/trace-target_s390x.c with a custom command
[171/2552] Generating trace/trace-target_s390x_kvm.h with a custom command
[172/2552] Generating trace/trace-target_s390x_kvm.c with a custom command
[173/2552] Generating trace/trace-target_sparc.h with a custom command
[174/2552] Generating trace/trace-target_sparc.c with a custom command
[175/2552] Linking static target subprojects/libvhost-user/libvhost-user-glib.a
[176/2552] Generating block/module_block.h with a custom command
[177/2552] Generating block/block-gen.c with a custom command
[178/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_f16.c.o
[179/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_f32.c.o
[180/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_extF80.c.o
[181/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_extF80M.c.o
[182/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_f128.c.o
[183/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_f128M.c.o
[184/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_roundToInt.c.o
[185/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_uint128.c.o
[186/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_uint128_inline.c.o
[187/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_eq128.c.o
[188/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_standardFunctionInfos.c.o
[189/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_functions_common.c.o
[190/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_functionInfos.c.o
[191/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_common.c.o
[192/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_ui32.c.o
[193/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_ui64.c.o
[194/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_fail.c.o
[195/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_i64.c.o
[196/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_verCases_inline.c.o
[197/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_i32.c.o
[198/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_a_ui64.c.o
[199/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_ab_f32.c.o
[200/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_testLoops_common.c.o
[201/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_abc_f16.c.o
[202/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_abc_f32.c.o
[203/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_writeTestsTotal.c.o
[204/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_ab_f16.c.o
[205/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_abc_f64.c.o
[206/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_abc_f128M.c.o
[207/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_a_ui32.c.o
[208/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_a_f16.c.o
[209/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_a_f32.c.o
[210/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_a_f64.c.o
[211/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_ab_f64.c.o
[212/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_random.c.o
[213/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_a_extF80M.c.o
[214/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_ab_extF80M.c.o
[215/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_a_f128M.c.o
[216/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_ab_f128M.c.o
[217/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_z_bool.c.o
[218/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_z_ui64.c.o
[219/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_verCases_writeFunctionName.c.o
[220/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_z_ui32.c.o
[221/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_z_f16.c.o
[222/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_z_f32.c.o
[223/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_z_f64.c.o
[224/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_z_extF80M.c.o
[225/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeCase_z_f128M.c.o
[226/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_extF80.c.o
[227/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_readHex.c.o
[228/2552] Generating tests/Test QAPI files with a custom command
[229/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_f32.c.o
[230/2552] Generating qemu-img-cmds.h with a custom command (wrapped by meson to capture output)
[231/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui64_z_f16.c.o
[232/2552] Generating qga/QGA QAPI files with a custom command
[233/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_verCases_common.c.o
[234/2552] Generating tests/include/QAPI test (include) with a custom command
[235/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui32_z_f16.c.o
[236/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui64_z_f64.c.o
[237/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_writeHex.c.o
[238/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui32_z_f64.c.o
[239/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i32_z_f32.c.o
[240/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_f16.c.o
[241/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui32_z_extF80.c.o
[242/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui32_z_f32.c.o
[243/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui32_z_f128.c.o
[244/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui64_z_f32.c.o
[245/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i32_z_f16.c.o
[246/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_f64.c.o
[247/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui64_z_f128.c.o
[248/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i32_z_extF80.c.o
[249/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_genCases_f128.c.o
[250/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_ui64_z_extF80.c.o
[251/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i64_z_f64.c.o
[252/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i64_z_f32.c.o
[253/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i32_z_f128.c.o
[254/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_ui32_rx.c.o
[255/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i32_z_f64.c.o
[256/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i64_z_f16.c.o
[257/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_ui64_rx.c.o
[258/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_i32_x.c.o
[259/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i64_z_extF80.c.o
[260/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_i32_rx.c.o
[261/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_ui32_x.c.o
[262/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_ui64_x.c.o
[263/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_f32.c.o
[264/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_i64_rx.c.o
[265/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_i64_x.c.o
[266/2552] Generating hmp-commands.h with a custom command (wrapped by meson to capture output)
[267/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_i64_z_f128.c.o
[268/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_f16.c.o
[269/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_f16_rx.c.o
[270/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_f64.c.o
[271/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_extF80.c.o
[272/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f16_z_f128.c.o
[273/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_abz_f16.c.o
[274/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_ab_f16_z_bool.c.o
[275/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_ui32_rx.c.o
[276/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_ui64_x.c.o
[277/2552] Generating hmp-commands-info.h with a custom command (wrapped by meson to capture output)
[278/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_i64_rx.c.o
[279/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_i64_x.c.o
[280/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_abcz_f16.c.o
[281/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_i32_rx.c.o
[282/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_ui32_x.c.o
[283/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_i32_x.c.o
[284/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_f16.c.o
[285/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_ui64_rx.c.o
[286/2552] Generating config-poison.h with a custom command (wrapped by meson to capture output)
[287/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_f64.c.o
[288/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_extF80.c.o
[289/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_f32.c.o
[290/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_abz_f32.c.o
[291/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_ab_f32_z_bool.c.o
[292/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f32_z_f128.c.o
[293/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_f32_rx.c.o
[294/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_ui32_x.c.o
[295/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_i32_x.c.o
[296/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_abcz_f32.c.o
[297/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_ui64_rx.c.o
[298/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_ui32_rx.c.o
[299/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_i64_rx.c.o
[300/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_ui64_x.c.o
[301/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_i32_rx.c.o
[302/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_i64_x.c.o
[303/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_f16.c.o
[304/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_f32.c.o
[305/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_f64.c.o
[306/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_ab_f64_z_bool.c.o
[307/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_f64_rx.c.o
[308/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_abz_f64.c.o
[309/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_ui32_rx.c.o
[310/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_ui64_x.c.o
[311/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_i64_rx.c.o
[312/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_le128.c.o
[313/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_extF80.c.o
[314/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_ui64_rx.c.o
[315/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_ui32_x.c.o
[316/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_i64_x.c.o
[317/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shortShiftRightJam64.c.o
[318/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f64_z_f128.c.o
[319/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_abcz_f64.c.o
[320/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_i32_rx.c.o
[321/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_i32_x.c.o
[322/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shortShiftRightJam128.c.o
[323/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shortShiftRightJam64Extra.c.o
[324/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shortShiftLeft128.c.o
[325/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_ui32.c.o
[326/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shortShiftRight128.c.o
[327/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_lt128.c.o
[328/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shortShiftRightJam128Extra.c.o
[329/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shiftRightJam32.c.o
[330/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shiftRightJam64.c.o
[331/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_f16.c.o
[332/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_ui64.c.o
[333/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shiftRightJam64Extra.c.o
[334/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shiftRightJam128.c.o
[335/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shiftRightJam128Extra.c.o
[336/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_add128.c.o
[337/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_countLeadingZeros8.c.o
[338/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_mul64ByShifted32To128.c.o
[339/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_approxRecip_1Ks.c.o
[340/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_countLeadingZeros16.c.o
[341/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_countLeadingZeros32.c.o
[342/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_sub256M.c.o
[343/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_f32.c.o
[344/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_countLeadingZeros64.c.o
[345/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_f64.c.o
[346/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_add256M.c.o
[347/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_sub128.c.o
[348/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_approxRecipSqrt_1Ks.c.o
[349/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_ui64_rx.c.o
[350/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_ui32_x.c.o
[351/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_extF80.c.o
[352/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_i64_rx.c.o
[353/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_mul64To128.c.o
[354/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_approxRecip32_1.c.o
[355/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_abz_extF80.c.o
[356/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_ui32_rx.c.o
[357/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_i32_x.c.o
[358/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_mul128By32.c.o
[359/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_extF80_rx.c.o
[360/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_ab_extF80_z_bool.c.o
[361/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_shiftRightJam256M.c.o
[362/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_roundToUI32.c.o
[363/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_i32_rx.c.o
[364/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_mul128To256M.c.o
[365/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_roundToUI64.c.o
[366/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_extF80_z_f128.c.o
[367/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_ui64_x.c.o
[368/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_approxRecipSqrt32_1.c.o
[369/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_i64_x.c.o
[370/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normSubnormalF16Sig.c.o
[371/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_f32.c.o
[372/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_f16.c.o
[373/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_extF80.c.o
[374/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_f128.c.o
[375/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_ab_f128_z_bool.c.o
[376/2552] Compiling C object subprojects/libvduse/libvduse.a.p/libvduse.c.o
[377/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_a_f128_z_f64.c.o
[378/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_abz_f128.c.o
[379/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_roundToI32.c.o
[380/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_roundToI64.c.o
[381/2552] Generating ui/input-keymap-xorgkbd-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[382/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_az_f128_rx.c.o
[383/2552] Generating ui/input-keymap-xorgxquartz-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[384/2552] Compiling C object tests/fp/libtestfloat.a.p/berkeley-testfloat-3_source_test_abcz_f128.c.o
[385/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normRoundPackToF16.c.o
[386/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normSubnormalF32Sig.c.o
[387/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normRoundPackToF32.c.o
[388/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normSubnormalF64Sig.c.o
[389/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_roundPackToF16.c.o
[390/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normRoundPackToF64.c.o
[391/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_roundPackToF32.c.o
[392/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_addMagsF32.c.o
[393/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_addMagsF16.c.o
[394/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_subMagsF16.c.o
[395/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normSubnormalExtF80Sig.c.o
[396/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normRoundPackToExtF80.c.o
[397/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_subMagsF32.c.o
[398/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_roundPackToF64.c.o
[399/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_softfloat_state.c.o
[400/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui32_to_extF80M.c.o
[401/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normSubnormalF128Sig.c.o
[402/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui32_to_f64.c.o
[403/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui32_to_f128.c.o
[404/2552] Linking static target subprojects/libvduse/libvduse.a
[405/2552] Generating ui/shader/texture-blit-flip-vert.h with a custom command (wrapped by meson to capture output)
[406/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_subMagsF64.c.o
[407/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui32_to_f32.c.o
[408/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui32_to_extF80.c.o
[409/2552] Generating qemu-options.def with a custom command (wrapped by meson to capture output)
[410/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_addMagsF64.c.o
[411/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_addMagsExtF80.c.o
[412/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_subMagsExtF80.c.o
[413/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_normRoundPackToF128.c.o
[414/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui32_to_f16.c.o
[415/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui32_to_f128M.c.o
[416/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui64_to_f32.c.o
[417/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui64_to_extF80M.c.o
[418/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui64_to_f128M.c.o
[419/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_mulAddF16.c.o
[420/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_mulAddF32.c.o
[421/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_addMagsF128.c.o
[422/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_subMagsF128.c.o
[423/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui64_to_f16.c.o
[424/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui64_to_f64.c.o
[425/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui64_to_extF80.c.o
[426/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i32_to_f16.c.o
[427/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i32_to_f32.c.o
[428/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i32_to_f64.c.o
[429/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i32_to_f128M.c.o
[430/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i64_to_f64.c.o
[431/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i64_to_extF80.c.o
[432/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i64_to_extF80M.c.o
[433/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_roundPackToF128.c.o
[434/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_ui64_to_f128.c.o
[435/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i32_to_extF80.c.o
[436/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i32_to_extF80M.c.o
[437/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i32_to_f128.c.o
[438/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i64_to_f16.c.o
[439/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i64_to_f128M.c.o
[440/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_i32.c.o
[441/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_mulAddF64.c.o
[442/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_roundPackToExtF80.c.o
[443/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i64_to_f32.c.o
[444/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_ui64.c.o
[445/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_i64.c.o
[446/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_ui32_r_minMag.c.o
[447/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_i32_r_minMag.c.o
[448/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_extF80M.c.o
[449/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_f128M.c.o
[450/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_sub.c.o
[451/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_mulAdd.c.o
[452/2552] Generating ui/shader/texture-blit-frag.h with a custom command (wrapped by meson to capture output)
[453/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_i64_to_f128.c.o
[454/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_ui32.c.o
[455/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_i64_r_minMag.c.o
[456/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_f32.c.o
[457/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_f64.c.o
[458/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_extF80.c.o
[459/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_f128.c.o
[460/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_add.c.o
[461/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_eq.c.o
[462/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_le.c.o
[463/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_eq_signaling.c.o
[464/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_isSignalingNaN.c.o
[465/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_to_ui64_r_minMag.c.o
[466/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_roundToInt.c.o
[467/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_mul.c.o
[468/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_lt.c.o
[469/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_le_quiet.c.o
[470/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_ui32.c.o
[471/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_i32.c.o
[472/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_ui32_r_minMag.c.o
[473/2552] Generating ui/shader/texture-blit-vert.h with a custom command (wrapped by meson to capture output)
[474/2552] Generating pc-bios/edk2-i386-vars.fd with a custom command (wrapped by meson to capture output)
[475/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_div.c.o
[476/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_rem.c.o
[477/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_sqrt.c.o
[478/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f16_lt_quiet.c.o
[479/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_ui64.c.o
[480/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_i64.c.o
[481/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_ui64_r_minMag.c.o
[482/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_i32_r_minMag.c.o
[483/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_i64_r_minMag.c.o
[484/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_f64.c.o
[485/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_extF80.c.o
[486/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_extF80M.c.o
[487/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_f128.c.o
[488/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_f128M.c.o
[489/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_add.c.o
[490/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_to_f16.c.o
[491/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_sub.c.o
[492/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_mulAdd.c.o
[493/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_s_mulAddF128.c.o
[494/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_le.c.o
[495/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_mul.c.o
[496/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_roundToInt.c.o
[497/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_eq.c.o
[498/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_le_quiet.c.o
[499/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_lt.c.o
[500/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_lt_quiet.c.o
[501/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_sqrt.c.o
[502/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_eq_signaling.c.o
[503/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_i32.c.o
[504/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_div.c.o
[505/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_isSignalingNaN.c.o
[506/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_i64.c.o
[507/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_mulAdd.c.o
[508/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_ui32_r_minMag.c.o
[509/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_add.c.o
[510/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_sub.c.o
[511/2552] Generating ui/input-keymap-xorgxwin-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[512/2552] Generating ui/input-keymap-osx-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[513/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_ui64_r_minMag.c.o
[514/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_ui64_r_minMag.c.o
[515/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_i32_r_minMag.c.o
[516/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_to_i64_r_minMag.c.o
[517/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_eq_signaling.c.o
[518/2552] Generating ui/input-keymap-qcode-to-atset1.c.inc with a custom command (wrapped by meson to capture output)
[519/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f32_rem.c.o
[520/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_eq.c.o
[521/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_le.c.o
[522/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_lt.c.o
[523/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_isSignalingNaN.c.o
[524/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_ui32.c.o
[525/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_sqrt.c.o
[526/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_lt_quiet.c.o
[527/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_i32.c.o
[528/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_add.c.o
[529/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_mul.c.o
[530/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_div.c.o
[531/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_i64.c.o
[532/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_f32.c.o
[533/2552] Generating ui/input-keymap-xorgevdev-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[534/2552] Generating ui/input-keymap-win32-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[535/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_rem.c.o
[536/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f64_le_quiet.c.o
[537/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_ui64.c.o
[538/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_ui32_r_minMag.c.o
[539/2552] Generating ui/input-keymap-linux-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[540/2552] Generating ui/input-keymap-qcode-to-atset2.c.inc with a custom command (wrapped by meson to capture output)
[541/2552] Generating ui/input-keymap-qcode-to-qnum.c.inc with a custom command (wrapped by meson to capture output)
[542/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_i64_r_minMag.c.o
[543/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_f16.c.o
[544/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_f64.c.o
[545/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_isSignalingNaN.c.o
[546/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_sub.c.o
[547/2552] Generating ui/input-keymap-qcode-to-atset3.c.inc with a custom command (wrapped by meson to capture output)
[548/2552] Generating ui/input-keymap-usb-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[549/2552] Linking static target tests/fp/libtestfloat.a
[550/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_roundToInt.c.o
[551/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_i32_r_minMag.c.o
[552/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_to_f128.c.o
[553/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_ui32.c.o
[554/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_isSignalingNaN.c.o
[555/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_ui32.c.o
[556/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_ui64_r_minMag.c.o
[557/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_i64_r_minMag.c.o
[558/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_f16.c.o
[559/2552] Generating ui/input-keymap-qnum-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[560/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_mul.c.o
[561/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_eq.c.o
[562/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_le.c.o
[563/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_lt.c.o
[564/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_eq_signaling.c.o
[565/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_le_quiet.c.o
[566/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_lt_quiet.c.o
[567/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_ui64.c.o
[568/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_i32.c.o
[569/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_i64.c.o
[570/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_ui32_r_minMag.c.o
[571/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_i32_r_minMag.c.o
[572/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_f32.c.o
[573/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_f64.c.o
[574/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_to_f128M.c.o
[575/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_roundToInt.c.o
[576/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_add.c.o
[577/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_sub.c.o
[578/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_mul.c.o
[579/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_div.c.o
[580/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_rem.c.o
[581/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_sqrt.c.o
[582/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_le.c.o
[583/2552] Generating ui/input-keymap-qcode-to-linux.c.inc with a custom command (wrapped by meson to capture output)
[584/2552] Generating ui/input-keymap-qcode-to-sun.c.inc with a custom command (wrapped by meson to capture output)
[585/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_lt.c.o
[586/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_le_quiet.c.o
[587/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_eq.c.o
[588/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_lt_quiet.c.o
[589/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80M_eq_signaling.c.o
[590/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_i64.c.o
[591/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_mulAdd.c.o
[592/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_ui32.c.o
[593/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_i32_r_minMag.c.o
[594/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_f64.c.o
[595/2552] Generating x86_64-softmmu-gdbstub-xml.c with a custom command (wrapped by meson to capture output)
[596/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_ui64_r_minMag.c.o
[597/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_rem.c.o
[598/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_ui64.c.o
[599/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_div.c.o
[600/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_ui32_r_minMag.c.o
[601/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_i32.c.o
[602/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_f32.c.o
[603/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_sub.c.o
[604/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_ui64.c.o
[605/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_i64_r_minMag.c.o
[606/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_add.c.o
[607/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_lt.c.o
[608/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_extF80.c.o
[609/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_f16.c.o
[610/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_eq.c.o
[611/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_extF80M.c.o
[612/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_i64.c.o
[613/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_f16.c.o
[614/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_rem.c.o
[615/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_ui64_r_minMag.c.o
[616/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_to_f64.c.o
[617/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_eq_signaling.c.o
[618/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_i32.c.o
[619/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_i32_r_minMag.c.o
[620/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_extF80_sqrt.c.o
[621/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_le_quiet.c.o
[622/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_lt_quiet.c.o
[623/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_le.c.o
[624/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_ui32_r_minMag.c.o
[625/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_sub.c.o
[626/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_f32.c.o
[627/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_f16UIToCommonNaN.c.o
[628/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_to_i64_r_minMag.c.o
[629/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_le.c.o
[630/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_roundToInt.c.o
[631/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_add.c.o
[632/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_commonNaNToF16UI.c.o
[633/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_extF80M_isSignalingNaN.c.o
[634/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_commonNaNToF64UI.c.o
[635/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_mul.c.o
[636/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_mul.c.o
[637/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_sqrt.c.o
[638/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_div.c.o
[639/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_lt.c.o
[640/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_softfloat_raiseFlags.c.o
[641/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_eq_signaling.c.o
[642/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_eq.c.o
[643/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_f32UIToCommonNaN.c.o
[644/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_commonNaNToF32UI.c.o
[645/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_f128M_isSignalingNaN.c.o
[646/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_mulAdd.c.o
[647/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_le_quiet.c.o
[648/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_propagateNaNF32UI.c.o
[649/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_extF80UIToCommonNaN.c.o
[650/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_roundToInt.c.o
[651/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_f64UIToCommonNaN.c.o
[652/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_commonNaNToExtF80UI.c.o
[653/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_commonNaNToF128UI.c.o
[654/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128M_lt_quiet.c.o
[655/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_f128UIToCommonNaN.c.o
[656/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_sqrt.c.o
[657/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_propagateNaNF64UI.c.o
[658/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_rem.c.o
[659/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_propagateNaNF16UI.c.o
[660/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_f128_div.c.o
[661/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_propagateNaNF128UI.c.o
[662/2552] Generating ui/input-keymap-atset1-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[663/2552] Compiling C object tests/fp/libsoftfloat.a.p/berkeley-softfloat-3_source_8086-SSE_s_propagateNaNExtF80UI.c.o
[664/2552] Generating ui/input-keymap-x11-to-qcode.c.inc with a custom command (wrapped by meson to capture output)
[665/2552] Generating qemu-version.h with a custom command (wrapped by meson to capture output)
[666/2552] Generating pc-bios/edk2-x86_64-secure-code.fd with a custom command (wrapped by meson to capture output)
[667/2552] Compiling C object subprojects/libvhost-user/libvhost-user.a.p/libvhost-user.c.o
[668/2552] Generating pc-bios/edk2-x86_64-code.fd with a custom command (wrapped by meson to capture output)
[669/2552] Generating pc-bios/edk2-i386-code.fd with a custom command (wrapped by meson to capture output)
[670/2552] Compiling C object tests/plugin/libempty.so.p/empty.c.o
[671/2552] Linking static target subprojects/libvhost-user/libvhost-user.a
[672/2552] Generating pc-bios/edk2-i386-secure-code.fd with a custom command (wrapped by meson to capture output)
[673/2552] Linking target tests/plugin/libempty.so
[674/2552] Linking target subprojects/libvhost-user/link-test
[675/2552] Generating storage-daemon/qapi/QAPI files for qemu-storage-daemon with a custom command
[676/2552] Compiling C object tests/plugin/libsyscall.so.p/syscall.c.o
[677/2552] Compiling C object tests/plugin/libbb.so.p/bb.c.o
[678/2552] Compiling C object tests/plugin/libmem.so.p/mem.c.o
[679/2552] Compiling C object tests/plugin/libinsn.so.p/insn.c.o
[680/2552] Linking target tests/plugin/libsyscall.so
[681/2552] Linking target tests/plugin/libbb.so
[682/2552] Generating pc-bios/edk2-arm-vars.fd with a custom command (wrapped by meson to capture output)
[683/2552] Linking target tests/plugin/libmem.so
[684/2552] Linking target tests/plugin/libinsn.so
[685/2552] Linking static target tests/fp/libsoftfloat.a
[686/2552] Generating pc-bios/edk2-arm-code.fd with a custom command (wrapped by meson to capture output)
[687/2552] Generating pc-bios/edk2-aarch64-code.fd with a custom command (wrapped by meson to capture output)
[688/2552] Generating qapi/shared QAPI source files with a custom command
[689/2552] Generating trace/trace-qapi_commands_ui_trace_events.c with a custom command
[690/2552] Generating trace/trace-qapi_commands_machine_target_trace_events.h with a custom command
[691/2552] Generating trace/trace-qapi_commands_authz_trace_events.h with a custom command
[692/2552] Generating trace/trace-qapi_commands_authz_trace_events.c with a custom command
[693/2552] Generating trace/trace-qapi_commands_block_trace_events.h with a custom command
[694/2552] Generating trace/trace-qapi_commands_block_trace_events.c with a custom command
[695/2552] Generating trace/trace-qapi_commands_block_core_trace_events.h with a custom command
[696/2552] Generating trace/trace-qapi_commands_block_core_trace_events.c with a custom command
[697/2552] Generating trace/trace-qapi_commands_block_export_trace_events.h with a custom command
[698/2552] Generating trace/trace-qapi_commands_block_export_trace_events.c with a custom command
[699/2552] Generating trace/trace-qapi_commands_char_trace_events.h with a custom command
[700/2552] Generating trace/trace-qapi_commands_char_trace_events.c with a custom command
[701/2552] Generating trace/trace-qapi_commands_common_trace_events.h with a custom command
[702/2552] Generating trace/trace-qapi_commands_common_trace_events.c with a custom command
[703/2552] Generating trace/trace-qapi_commands_compat_trace_events.h with a custom command
[704/2552] Generating trace/trace-qapi_commands_compat_trace_events.c with a custom command
[705/2552] Generating trace/trace-qapi_commands_control_trace_events.h with a custom command
[706/2552] Generating trace/trace-qapi_commands_control_trace_events.c with a custom command
[707/2552] Generating trace/trace-qapi_commands_crypto_trace_events.h with a custom command
[708/2552] Generating trace/trace-qapi_commands_crypto_trace_events.c with a custom command
[709/2552] Generating trace/trace-qapi_commands_dump_trace_events.h with a custom command
[710/2552] Generating trace/trace-qapi_commands_dump_trace_events.c with a custom command
[711/2552] Generating trace/trace-qapi_commands_error_trace_events.h with a custom command
[712/2552] Generating trace/trace-qapi_commands_error_trace_events.c with a custom command
[713/2552] Generating trace/trace-qapi_commands_introspect_trace_events.h with a custom command
[714/2552] Generating trace/trace-qapi_commands_introspect_trace_events.c with a custom command
[715/2552] Generating trace/trace-qapi_commands_job_trace_events.h with a custom command
[716/2552] Generating trace/trace-qapi_commands_job_trace_events.c with a custom command
[717/2552] Generating trace/trace-qapi_commands_machine_trace_events.h with a custom command
[718/2552] Generating trace/trace-qapi_commands_machine_trace_events.c with a custom command
[719/2552] Generating trace/trace-qapi_commands_migration_trace_events.h with a custom command
[720/2552] Generating trace/trace-qapi_commands_migration_trace_events.c with a custom command
[721/2552] Generating trace/trace-qapi_commands_misc_trace_events.h with a custom command
[722/2552] Generating trace/trace-qapi_commands_misc_trace_events.c with a custom command
[723/2552] Generating trace/trace-qapi_commands_net_trace_events.h with a custom command
[724/2552] Generating trace/trace-qapi_commands_net_trace_events.c with a custom command
[725/2552] Generating trace/trace-qapi_commands_pragma_trace_events.h with a custom command
[726/2552] Generating trace/trace-qapi_commands_pragma_trace_events.c with a custom command
[727/2552] Generating trace/trace-qapi_commands_qom_trace_events.h with a custom command
[728/2552] Generating trace/trace-qapi_commands_qom_trace_events.c with a custom command
[729/2552] Generating trace/trace-qapi_commands_replay_trace_events.h with a custom command
[730/2552] Generating trace/trace-qapi_commands_replay_trace_events.c with a custom command
[731/2552] Generating trace/trace-qapi_commands_run_state_trace_events.h with a custom command
[732/2552] Generating trace/trace-qapi_commands_run_state_trace_events.c with a custom command
[733/2552] Generating trace/trace-qapi_commands_sockets_trace_events.h with a custom command
[734/2552] Generating trace/trace-qapi_commands_sockets_trace_events.c with a custom command
[735/2552] Generating trace/trace-qapi_commands_stats_trace_events.h with a custom command
[736/2552] Generating trace/trace-qapi_commands_stats_trace_events.c with a custom command
[737/2552] Generating trace/trace-qapi_commands_trace_trace_events.h with a custom command
[738/2552] Generating trace/trace-qapi_commands_trace_trace_events.c with a custom command
[739/2552] Generating trace/trace-qapi_commands_transaction_trace_events.h with a custom command
[740/2552] Generating trace/trace-qapi_commands_transaction_trace_events.c with a custom command
[741/2552] Generating trace/trace-qapi_commands_virtio_trace_events.h with a custom command
[742/2552] Generating trace/trace-qapi_commands_virtio_trace_events.c with a custom command
[743/2552] Generating trace/trace-qapi_commands_yank_trace_events.h with a custom command
[744/2552] Generating trace/trace-qapi_commands_yank_trace_events.c with a custom command
[745/2552] Generating trace/trace-qapi_commands_acpi_trace_events.h with a custom command
[746/2552] Generating trace/trace-qapi_commands_acpi_trace_events.c with a custom command
[747/2552] Generating trace/trace-qapi_commands_audio_trace_events.h with a custom command
[748/2552] Generating trace/trace-qapi_commands_audio_trace_events.c with a custom command
[749/2552] Generating trace/trace-qapi_commands_qdev_trace_events.h with a custom command
[750/2552] Generating trace/trace-qapi_commands_qdev_trace_events.c with a custom command
[751/2552] Generating trace/trace-qapi_commands_pci_trace_events.c with a custom command
[752/2552] Generating trace/trace-qapi_commands_rdma_trace_events.h with a custom command
[753/2552] Generating trace/trace-qapi_commands_pci_trace_events.h with a custom command
[754/2552] Generating trace/trace-qapi_commands_rdma_trace_events.c with a custom command
[755/2552] Generating trace/trace-qapi_commands_rocker_trace_events.h with a custom command
[756/2552] Generating trace/trace-qapi_commands_rocker_trace_events.c with a custom command
[757/2552] Generating trace/trace-qapi_commands_tpm_trace_events.h with a custom command
[758/2552] Generating trace/trace-qapi_commands_tpm_trace_events.c with a custom command
[759/2552] Generating trace/trace-qapi_commands_machine_target_trace_events.c with a custom command
[760/2552] Generating trace/trace-qapi_commands_ui_trace_events.h with a custom command
[761/2552] Generating trace/trace-qapi_commands_misc_target_trace_events.h with a custom command
[762/2552] Generating trace/trace-qapi_commands_misc_target_trace_events.c with a custom command
[763/2552] Compiling C object libqom.fa.p/qom_container.c.o
[764/2552] Compiling C object libqom.fa.p/qom_qom-qobject.c.o
[765/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_authz_trace_events.c.o
[766/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-target_nios2.c.o
[767/2552] Compiling C object libqemuutil.a.p/qapi_qmp-event.c.o
[768/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-target_ppc.c.o
[769/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-authz.c.o
[770/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_block_export_trace_events.c.o
[771/2552] Generating trace/trace-events-all with a custom command (wrapped by meson to capture output)
[772/2552] Compiling C object libqemuutil.a.p/util_memfd.c.o
[773/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_block_core_trace_events.c.o
[774/2552] Compiling C object libqemuutil.a.p/qobject_qobject.c.o
[775/2552] Compiling C object libcrypto.fa.p/crypto_hmac-glib.c.o
[776/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-run-state.c.o
[777/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qom.c.o
[778/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-rdma.c.o
[779/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_transaction_trace_events.c.o
[780/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-authz.c.o
[781/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-dump.c.o
[782/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-control.c.o
[783/2552] Compiling C object libevent-loop-base.a.p/event-loop-base.c.o
[784/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-block.c.o
[785/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-misc.c.o
[786/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-ui.c.o
[787/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_rocker_trace_events.c.o
[788/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-rocker.c.o
[789/2552] Compiling C object libqom.fa.p/hw_nvram_fw_cfg-interface.c.o
[790/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-authz.c.o
[791/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-common.c.o
[792/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-misc.c.o
[793/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-common.c.o
[794/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-builtin-types.c.o
[795/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-common.c.o
[796/2552] Linking static target libevent-loop-base.a
[797/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-char.c.o
[798/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_trace_trace_events.c.o
[799/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-compat.c.o
[800/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-block.c.o
[801/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-compat.c.o
[802/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-compat.c.o
[803/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-rdma.c.o
[804/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-introspect.c.o
[805/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-rdma.c.o
[806/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-compat.c.o
[807/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-block-export.c.o
[808/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-error.c.o
[809/2552] Compiling C object libqemuutil.a.p/qapi_qmp-registry.c.o
[810/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-control.c.o
[811/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-crypto.c.o
[812/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_qdev_trace_events.c.o
[813/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-error.c.o
[814/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-pci.c.o
[815/2552] Compiling C object libcommon.fa.p/migration_xbzrle.c.o
[816/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-authz.c.o
[817/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-control.c.o
[818/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-error.c.o
[819/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-error.c.o
[820/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-pci.c.o
[821/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-rdma.c.o
[822/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-crypto.c.o
[823/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-dump.c.o
[824/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_pci_trace_events.c.o
[825/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_tpm_trace_events.c.o
[826/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-run-state.c.o
[827/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-block-export.c.o
[828/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-char.c.o
[829/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-sockets.c.o
[830/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-job.c.o
[831/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_machine_target_trace_events.c.o
[832/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-migration.c.o
[833/2552] Compiling C object libcrypto.fa.p/crypto_secret_keyring.c.o
[834/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-job.c.o
[835/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_ui_trace_events.c.o
[836/2552] Compiling C object libqom.fa.p/qom_object_interfaces.c.o
[837/2552] Compiling C object libcommon.fa.p/backends_cryptodev-vhost-user.c.o
[838/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-builtin-visit.c.o
[839/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_rdma_trace_events.c.o
[840/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_acpi_trace_events.c.o
[841/2552] Compiling C object libio.fa.p/io_channel-null.c.o
[842/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-hw_block.c.o
[843/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-qapi_commands_misc_target_trace_events.c.o
[844/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-crypto.c.o
[845/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-introspect.c.o
[846/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-introspect.c.o
[847/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-block.c.o
[848/2552] Compiling C object libio.fa.p/io_channel-buffer.c.o
[849/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-common.c.o
[850/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-job.c.o
[851/2552] Compiling C object libcommon.fa.p/migration_page_cache.c.o
[852/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-net.c.o
[853/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-migration.c.o
[854/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-block.c.o
[855/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-machine.c.o
[856/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-dump.c.o
[857/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-ui.c.o
[858/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-control.c.o
[859/2552] Compiling C object libio.fa.p/io_channel-file.c.o
[860/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-dump.c.o
[861/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-net.c.o
[862/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-rocker.c.o
[863/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-misc.c.o
[864/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-machine.c.o
[865/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-block-core.c.o
[866/2552] Compiling C object libqemuutil.a.p/qapi_string-output-visitor.c.o
[867/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-block.c.o
[868/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-block-export.c.o
[869/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-introspect.c.o
[870/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-block-export.c.o
[871/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-run-state.c.o
[872/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-job.c.o
[873/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-char.c.o
[874/2552] Compiling C object libcommon.fa.p/backends_dbus-vmstate.c.o
[875/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-pragma.c.o
[876/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-pragma.c.o
[877/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-pragma.c.o
[878/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-pragma.c.o
[879/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-replay.c.o
[880/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-crypto.c.o
[881/2552] Compiling C object libcommon.fa.p/backends_tpm_tpm_emulator.c.o
[882/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-sockets.c.o
[883/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-sockets.c.o
[884/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-replay.c.o
[885/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-qom.c.o
[886/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-gdbstub.c.o
[887/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-stats.c.o
[888/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-trace.c.o
[889/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-commands-ui.c.o
[890/2552] Compiling C object libcommon.fa.p/migration_vmstate-types.c.o
[891/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-char.c.o
[892/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-replay.c.o
[893/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-virtio.c.o
[894/2552] Compiling C object libqemuutil.a.p/meson-generated_.._trace_trace-hw_ppc.c.o
[895/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-trace.c.o
[896/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-events-yank.c.o
[897/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-types-block-core.c.o
[... QEMU ninja build output, steps 898-1362, elided: compiling generated QAPI
 and trace objects plus libqemuutil, libqom, libauthz, libcrypto, libio,
 libblock, libmigration, libchardev, libhwcore, libblockdev, libcommon, and
 tests/qtest/libqos objects; linking static targets libqom.fa, libauthz.fa,
 tests/qtest/libqos/libqos.fa, libcrypto.fa, and libmigration.fa ...]
[1363/2552] Compiling C object libchardev.fa.p/chardev_char-pipe.c.o
[1364/2552] Compiling C object libcommon.fa.p/page-vary-common.c.o
[1365/2552] Compiling C object libchardev.fa.p/chardev_char-ringbuf.c.o
[1366/2552] Compiling C object libblock.fa.p/block_blkdebug.c.o
[1367/2552] Linking static target libio.fa
[1368/2552] Compiling C object libblock.fa.p/block_monitor_bitmap-qmp-cmds.c.o
[1369/2552] Compiling C object libhwcore.fa.p/hw_core_hotplug.c.o
[1370/2552] Compiling C object libblock.fa.p/block_crypto.c.o
[1371/2552] Compiling C object libblock.fa.p/block_parallels-ext.c.o
[1372/2552] Compiling C object libchardev.fa.p/chardev_char-udp.c.o
[1373/2552] Compiling C object libblock.fa.p/block_qed-table.c.o
[1374/2552] Compiling C object libblockdev.fa.p/job-qmp.c.o
[1375/2552] Compiling C object libchardev.fa.p/chardev_char-serial.c.o
[1376/2552] Compiling C object libhwcore.fa.p/hw_core_irq.c.o
[1377/2552] Compiling C object libblock.fa.p/block_stream.c.o
[1378/2552] Compiling C object libqmp.fa.p/qom_qom-qmp-cmds.c.o
[1379/2552] Compiling C object libblockdev.fa.p/block_export_vduse-blk.c.o
[1380/2552] Compiling C object libchardev.fa.p/chardev_char-fd.c.o
[1381/2552] Compiling C object libblockdev.fa.p/iothread.c.o
[1382/2552] Compiling C object libqmp.fa.p/monitor_qmp-cmds-control.c.o
[1383/2552] Compiling C object libchardev.fa.p/chardev_char-parallel.c.o
[1384/2552] Compiling C object libblock.fa.p/block_qapi.c.o
[1385/2552] Compiling C object libchardev.fa.p/chardev_char-fe.c.o
[1386/2552] Compiling C object libblockdev.fa.p/blockdev-nbd.c.o
[1387/2552] Compiling C object libchardev.fa.p/chardev_char-pty.c.o
[1388/2552] Compiling C object libblock.fa.p/block_dmg.c.o
[1389/2552] Compiling C object libblockdev.fa.p/block_export_vhost-user-blk-server.c.o
[1390/2552] Compiling C object libblock.fa.p/block_qcow2-snapshot.c.o
[1391/2552] Compiling C object libblock.fa.p/block_qcow2-cache.c.o
[1392/2552] Compiling C object libblock.fa.p/block_snapshot.c.o
[1393/2552] Compiling C object libblock.fa.p/block_vdi.c.o
[1394/2552] Compiling C object libblockdev.fa.p/block_export_export.c.o
[1395/2552] Compiling C object libhwcore.fa.p/hw_core_bus.c.o
[1396/2552] Compiling C object libhwcore.fa.p/hw_core_clock.c.o
[1397/2552] Compiling C object libhwcore.fa.p/hw_core_qdev-clock.c.o
[1398/2552] Compiling C object libcommon.fa.p/ui_input-keymap.c.o
[1399/2552] Compiling C object libcommon.fa.p/migration_multifd.c.o
[1400/2552] Compiling C object libblock.fa.p/job.c.o
[1401/2552] Compiling C object libblock.fa.p/block_throttle-groups.c.o
[1402/2552] Compiling C object libblock.fa.p/block_vhdx-log.c.o
[1403/2552] Compiling C object libcommon.fa.p/ui_clipboard.c.o
[1404/2552] Compiling C object libblock.fa.p/block_replication.c.o
[1405/2552] Compiling C object libhwcore.fa.p/hw_core_resettable.c.o
[1406/2552] Compiling C object libchardev.fa.p/chardev_char-mux.c.o
[1407/2552] Compiling C object libblock.fa.p/block_dirty-bitmap.c.o
[1408/2552] Compiling C object libblock.fa.p/block_block-copy.c.o
[1409/2552] Compiling C object libcommon.fa.p/ui_spice-module.c.o
[1410/2552] Compiling C object libhwcore.fa.p/hw_core_qdev-hotplug.c.o
[1411/2552] Compiling C object libcommon.fa.p/ui_kbd-state.c.o
[1412/2552] Compiling C object libcommon.fa.p/ui_udmabuf.c.o
[1413/2552] Compiling C object libcommon.fa.p/qom_qom-hmp-cmds.c.o
[1414/2552] Compiling C object libcommon.fa.p/ui_input-legacy.c.o
[1415/2552] Compiling C object libcommon.fa.p/cpus-common.c.o
[1416/2552] Compiling C object libqmp.fa.p/monitor_qmp.c.o
[1417/2552] Compiling C object libblock.fa.p/meson-generated_.._block_block-gen.c.o
[1418/2552] Compiling C object libcommon.fa.p/ui_cursor.c.o
[1419/2552] Compiling C object libblock.fa.p/block_vpc.c.o
[1420/2552] Compiling C object libblock.fa.p/block_parallels.c.o
[1421/2552] Compiling C object libcommon.fa.p/hw_core_machine-smp.c.o
[1422/2552] Compiling C object libcommon.fa.p/ui_vnc-palette.c.o
[1423/2552] Compiling C object libhwcore.fa.p/hw_core_qdev-properties.c.o
[1424/2552] Compiling C object libblock.fa.p/block_quorum.c.o
[1425/2552] Compiling C object libcommon.fa.p/hw_core_cpu-common.c.o
[1426/2552] Compiling C object libcommon.fa.p/ui_qemu-pixman.c.o
[1427/2552] Compiling C object libcommon.fa.p/ui_keymaps.c.o
[1428/2552] Compiling C object libcommon.fa.p/ui_vnc-enc-zlib.c.o
[1429/2552] Compiling C object libqmp.fa.p/monitor_monitor.c.o
[1430/2552] Compiling C object libcommon.fa.p/hw_acpi_ghes-stub.c.o
[1431/2552] Compiling C object libblock.fa.p/block_qcow.c.o
[1432/2552] Linking static target libqmp.fa
[1433/2552] Compiling C object libcommon.fa.p/ui_util.c.o
[1434/2552] Compiling C object libcommon.fa.p/hw_acpi_pci.c.o
[1435/2552] Compiling C object libcommon.fa.p/ui_input-linux.c.o
[1436/2552] Compiling C object libcommon.fa.p/ui_vnc-ws.c.o
[1437/2552] Compiling C object libblock.fa.p/block_qcow2-bitmap.c.o
[1438/2552] Compiling C object libcommon.fa.p/hw_acpi_utils.c.o
[1439/2552] Compiling C object libchardev.fa.p/chardev_char.c.o
[1440/2552] Compiling C object libcommon.fa.p/hw_acpi_acpi_interface.c.o
[1441/2552] Compiling C object libcommon.fa.p/ui_input-barrier.c.o
[1442/2552] Compiling C object libcommon.fa.p/ui_vnc-auth-vencrypt.c.o
[1443/2552] Compiling C object libblock.fa.p/qemu-io-cmds.c.o
[1444/2552] Compiling C object libblock.fa.p/nbd_client.c.o
[1445/2552] Compiling C object libcommon.fa.p/ui_vnc-clipboard.c.o
[1446/2552] Compiling C object libcommon.fa.p/hw_acpi_hmat.c.o
[1447/2552] Compiling C object libblock.fa.p/block_vhdx.c.o
[1448/2552] Compiling C object libcommon.fa.p/ui_input.c.o
[1449/2552] Compiling C object libhwcore.fa.p/hw_core_qdev.c.o
[1450/2552] Compiling C object libchardev.fa.p/chardev_char-socket.c.o
[1451/2552] Compiling C object libcommon.fa.p/hw_acpi_tpm.c.o
[1452/2552] Compiling C object libcommon.fa.p/hw_audio_gusemu_mixer.c.o
[1453/2552] Compiling C object libcommon.fa.p/hw_audio_gusemu_hal.c.o
[1454/2552] Compiling C object libcommon.fa.p/hw_core_fw-path-provider.c.o
[1455/2552] Linking static target libhwcore.fa
[1456/2552] Compiling C object libcommon.fa.p/hw_char_parallel-isa.c.o
[1457/2552] Compiling C object libcommon.fa.p/ui_vnc-enc-hextile.c.o
[1458/2552] Compiling C object libcommon.fa.p/hw_audio_adlib.c.o
[1459/2552] Linking static target libchardev.fa
[1460/2552] Compiling C object libcommon.fa.p/hw_acpi_bios-linker-loader.c.o
[1461/2552] Compiling C object libcommon.fa.p/hw_audio_gus.c.o
[1462/2552] Compiling C object libcommon.fa.p/hw_core_clock-vmstate.c.o
[1463/2552] Compiling C object libcommon.fa.p/hw_acpi_ipmi.c.o
[1464/2552] Compiling C object libcommon.fa.p/hw_acpi_vmgenid.c.o
[1465/2552] Compiling C object libblock.fa.p/block_mirror.c.o
[1466/2552] Compiling C object libcommon.fa.p/ui_vnc-jobs.c.o
[1467/2552] Compiling C object libblock.fa.p/block_qed.c.o
[1468/2552] Compiling C object libcommon.fa.p/hw_audio_pcspk.c.o
[1469/2552] Compiling C object libcommon.fa.p/hw_char_debugcon.c.o
[1470/2552] Compiling C object libcommon.fa.p/hw_core_qdev-fw.c.o
[1471/2552] Compiling C object libcommon.fa.p/hw_core_vm-change-state-handler.c.o
[1472/2552] Compiling C object libcommon.fa.p/hw_acpi_cxl.c.o
[1473/2552] Compiling C object libcommon.fa.p/hw_audio_soundhw.c.o
[1474/2552] Compiling C object libcommon.fa.p/hw_block_cdrom.c.o
[1475/2552] Compiling C object libcommon.fa.p/hw_display_edid-region.c.o
[1476/2552] Compiling C object libcommon.fa.p/hw_audio_cs4231a.c.o
[1477/2552] Compiling C object libcommon.fa.p/hw_block_block.c.o
[1478/2552] Compiling C object libcommon.fa.p/hw_char_serial-isa.c.o
[1479/2552] Compiling C object libcommon.fa.p/hw_acpi_viot.c.o
[1480/2552] Compiling C object libcommon.fa.p/hw_core_gpio.c.o
[1481/2552] Compiling C object libcommon.fa.p/hw_cpu_cluster.c.o
[1482/2552] Compiling C object libcommon.fa.p/hw_acpi_generic_event_device.c.o
[1483/2552] Compiling C object libcommon.fa.p/hw_core_cpu-sysemu.c.o
[1484/2552] Compiling C object libcommon.fa.p/hw_acpi_cpu_hotplug.c.o
[1485/2552] Compiling C object libcommon.fa.p/hw_core_nmi.c.o
[1486/2552] Compiling C object libcommon.fa.p/hw_display_i2c-ddc.c.o
[1487/2552] Compiling C object libcommon.fa.p/hw_block_hd-geometry.c.o
[1488/2552] Compiling C object libcommon.fa.p/hw_acpi_core.c.o
[1489/2552] Compiling C object libcommon.fa.p/hw_core_null-machine.c.o
[1490/2552] Compiling C object libblock.fa.p/block_nbd.c.o
[1491/2552] Compiling C object libcommon.fa.p/hw_char_serial-pci-multi.c.o
[1492/2552] Compiling C object libqemuutil.a.p/meson-generated_.._qapi_qapi-visit-block-core.c.o
[1493/2552] Compiling C object libcommon.fa.p/hw_char_virtio-console.c.o
[1494/2552] Compiling C object libcommon.fa.p/hw_char_serial-pci.c.o
[1495/2552] Compiling C object libcommon.fa.p/hw_char_ipoctal232.c.o
[1496/2552] Compiling C object libcommon.fa.p/hw_core_machine-hmp-cmds.c.o
[1497/2552] Compiling C object libcommon.fa.p/hw_acpi_ich9_tco.c.o
[1498/2552] Compiling C object libcommon.fa.p/hw_block_fdc-isa.c.o
[1499/2552] Compiling C object libcommon.fa.p/hw_core_generic-loader.c.o
[1500/2552] Compiling C object libcommon.fa.p/hw_audio_es1370.c.o
[1501/2552] Compiling C object libcommon.fa.p/hw_char_parallel.c.o
[1502/2552] Compiling C object libcommon.fa.p/hw_core_guest-loader.c.o
[1503/2552] Compiling C object libcommon.fa.p/hw_cpu_core.c.o
[1504/2552] Compiling C object libcommon.fa.p/hw_acpi_ich9.c.o
[1505/2552] Compiling C object libblock.fa.p/block_block-backend.c.o
[1506/2552] Compiling C object libcommon.fa.p/hw_i2c_smbus_slave.c.o
[1507/2552] Compiling C object libcommon.fa.p/hw_display_ati_dbg.c.o
[1508/2552] Compiling C object libblock.fa.p/block_file-posix.c.o
[1509/2552] Compiling C object libcommon.fa.p/hw_cxl_cxl-device-utils.c.o
[1510/2552] Compiling C object libcommon.fa.p/hw_display_edid-generate.c.o
[1511/2552] Compiling C object libcommon.fa.p/hw_display_ramfb-standalone.c.o
[1512/2552] Compiling C object libcommon.fa.p/hw_display_cirrus_vga_isa.c.o
[1513/2552] Compiling C object libcommon.fa.p/hw_core_sysbus.c.o
[1514/2552] Compiling C object libcommon.fa.p/hw_display_ramfb.c.o
[1515/2552] Compiling C object libcommon.fa.p/hw_acpi_piix4.c.o
[1516/2552] Compiling C object libcommon.fa.p/hw_display_vga-isa.c.o
[1517/2552] Compiling C object libcommon.fa.p/hw_acpi_cpu.c.o
[1518/2552] Compiling C object libcommon.fa.p/hw_display_acpi-vga.c.o
[1519/2552] Compiling C object libcommon.fa.p/hw_acpi_memory_hotplug.c.o
[1520/2552] Compiling C object libcommon.fa.p/hw_i2c_smbus_master.c.o
[1521/2552] Compiling C object libcommon.fa.p/hw_acpi_pcihp.c.o
[1522/2552] Compiling C object libcommon.fa.p/hw_cxl_cxl-component-utils.c.o
[1523/2552] Compiling C object libblock.fa.p/block_qcow2-cluster.c.o
[1524/2552] Compiling C object libcommon.fa.p/hw_cxl_cxl-cdat.c.o
[1525/2552] Compiling C object libblock.fa.p/block_nvme.c.o
[1526/2552] Compiling C object libcommon.fa.p/hw_cxl_cxl-mailbox-utils.c.o
[1527/2552] Compiling C object libcommon.fa.p/hw_intc_intc.c.o
[1528/2552] Compiling C object libcommon.fa.p/hw_audio_sb16.c.o
[1529/2552] Compiling C object libcommon.fa.p/hw_i2c_bitbang_i2c.c.o
[1530/2552] Compiling C object libcommon.fa.p/hw_cxl_cxl-host.c.o
[1531/2552] Compiling C object libblock.fa.p/block_vmdk.c.o
[1532/2552] Compiling C object libcommon.fa.p/hw_char_serial.c.o
[1533/2552] Compiling C object libcommon.fa.p/hw_audio_ac97.c.o
[1534/2552] Compiling C object libcommon.fa.p/hw_display_vga-pci.c.o
[1535/2552] Compiling C object libcommon.fa.p/hw_acpi_erst.c.o
[1536/2552] Compiling C object libcommon.fa.p/hw_acpi_nvdimm.c.o
[1537/2552] Compiling C object libcommon.fa.p/hw_display_ati_2d.c.o
[1538/2552] Compiling C object libcommon.fa.p/hw_display_bochs-display.c.o
[1539/2552] Compiling C object libcommon.fa.p/hw_audio_fmopl.c.o
[1540/2552] Compiling C object libcommon.fa.p/hw_dma_i8257.c.o
[1541/2552] Compiling C object libcommon.fa.p/hw_i2c_core.c.o
[1542/2552] Compiling C object libcommon.fa.p/hw_audio_hda-codec.c.o
[1543/2552] Compiling C object libcommon.fa.p/hw_ide_isa.c.o
[1544/2552] Compiling C object libcommon.fa.p/hw_ipack_ipack.c.o
[1545/2552] Compiling C object libcommon.fa.p/hw_ide_ich.c.o
[1546/2552] Compiling C object libcommon.fa.p/hw_audio_intel-hda.c.o
[1547/2552] Compiling C object libcommon.fa.p/hw_i2c_pm_smbus.c.o
[1548/2552] Compiling C object libcommon.fa.p/hw_i2c_smbus_eeprom.c.o
[1549/2552] Compiling C object libcommon.fa.p/hw_ide_ioport.c.o
[1550/2552] Compiling C object libcommon.fa.p/hw_i2c_smbus_ich9.c.o
[1551/2552] Compiling C object libcommon.fa.p/hw_core_qdev-properties-system.c.o
[1552/2552] Compiling C object libcommon.fa.p/hw_input_vhost-user-input.c.o
[1553/2552] Compiling C object libcommon.fa.p/hw_ipmi_isa_ipmi_kcs.c.o
[1554/2552] Compiling C object libcommon.fa.p/hw_ipmi_isa_ipmi_bt.c.o
[1555/2552] Compiling C object libcommon.fa.p/hw_input_virtio-input-host.c.o
[1556/2552] Compiling C object libcommon.fa.p/hw_input_hid.c.o
[1557/2552] Compiling C object libcommon.fa.p/hw_ipmi_ipmi.c.o
[1558/2552] Compiling C object libcommon.fa.p/hw_misc_pc-testdev.c.o
[1559/2552] Compiling C object libcommon.fa.p/hw_misc_debugexit.c.o
[1560/2552] Compiling C object libcommon.fa.p/hw_ide_qdev.c.o
[1561/2552] Compiling C object libcommon.fa.p/hw_input_virtio-input-hid.c.o
[1562/2552] Compiling C object libcommon.fa.p/ui_vnc-enc-tight.c.o
[1563/2552] Compiling C object libcommon.fa.p/hw_ide_piix.c.o
[1564/2552] Compiling C object libcommon.fa.p/hw_intc_i8259_common.c.o
[1565/2552] Compiling C object libcommon.fa.p/hw_intc_ioapic_common.c.o
[1566/2552] Compiling C object libcommon.fa.p/hw_ipmi_ipmi_bt.c.o
[1567/2552] Compiling C object libcommon.fa.p/hw_block_pflash_cfi01.c.o
[1568/2552] Compiling C object libcommon.fa.p/hw_ipmi_ipmi_kcs.c.o
[1569/2552] Compiling C object libcommon.fa.p/hw_ipmi_smbus_ipmi.c.o
[1570/2552] Compiling C object libcommon.fa.p/hw_misc_sga.c.o
[1571/2552] Compiling C object libcommon.fa.p/hw_isa_isa-bus.c.o
[1572/2552] Compiling C object libcommon.fa.p/migration_migration.c.o
[1573/2552] Compiling C object libblock.fa.p/block_io.c.o
[1574/2552] Compiling C object libcommon.fa.p/hw_ipmi_pci_ipmi_kcs.c.o
[1575/2552] Compiling C object libcommon.fa.p/hw_isa_apm.c.o
[1576/2552] Compiling C object libcommon.fa.p/hw_misc_applesmc.c.o
[1577/2552] Compiling C object libcommon.fa.p/hw_intc_i8259.c.o
[1578/2552] Compiling C object libcommon.fa.p/hw_mem_nvdimm.c.o
[1579/2552] Compiling C object libcommon.fa.p/hw_ide_pci.c.o
[1580/2552] Compiling C object libcommon.fa.p/hw_misc_vmcoreinfo.c.o
[1581/2552] Compiling C object libcommon.fa.p/hw_misc_pvpanic.c.o
[1582/2552] Compiling C object libcommon.fa.p/hw_input_virtio-input.c.o
[1583/2552] Compiling C object libcommon.fa.p/hw_ipmi_ipmi_bmc_extern.c.o
[1584/2552] Compiling C object libcommon.fa.p/hw_ipmi_pci_ipmi_bt.c.o
[1585/2552] Compiling C object libcommon.fa.p/hw_input_pckbd.c.o
[1586/2552] Compiling C object libcommon.fa.p/hw_misc_pvpanic-isa.c.o
[1587/2552] Compiling C object libcommon.fa.p/hw_block_fdc.c.o
[1588/2552] Compiling C object libcommon.fa.p/hw_net_rocker_rocker_world.c.o
[1589/2552] Compiling C object libcommon.fa.p/hw_ipack_tpci200.c.o
[1590/2552] Compiling C object libcommon.fa.p/hw_misc_pvpanic-pci.c.o
[1591/2552] Compiling C object libcommon.fa.p/hw_net_ne2000-pci.c.o
[1592/2552] Compiling C object libcommon.fa.p/hw_core_machine.c.o
[1593/2552] Compiling C object libcommon.fa.p/hw_misc_pci-testdev.c.o
[1594/2552] Compiling C object libcommon.fa.p/hw_mem_pc-dimm.c.o
[1595/2552] Compiling C object libblockdev.fa.p/blockdev.c.o
[1596/2552] Compiling C object libblock.fa.p/block_qcow2-refcount.c.o
[1597/2552] Compiling C object libcommon.fa.p/hw_display_ati.c.o
[1598/2552] Compiling C object libcommon.fa.p/hw_misc_edu.c.o
[1599/2552] Compiling C object libcommon.fa.p/hw_net_ne2000-isa.c.o
[1600/2552] Linking static target libqemuutil.a
[1601/2552] Compiling C object libcommon.fa.p/hw_nvram_eeprom93xx.c.o
[1602/2552] Compiling C object libcommon.fa.p/hw_isa_piix3.c.o
[1603/2552] Compiling C object libcommon.fa.p/ui_console.c.o
[1604/2552] Compiling C object libcommon.fa.p/hw_net_rocker_rocker_fp.c.o
[1605/2552] Compiling C object libcommon.fa.p/hw_mem_cxl_type3.c.o
[1606/2552] Compiling C object libcommon.fa.p/hw_net_can_can_mioe3680_pci.c.o
[1607/2552] Compiling C object libcommon.fa.p/hw_net_can_can_kvaser_pci.c.o
[1608/2552] Compiling C object libcommon.fa.p/hw_net_can_can_pcm3680_pci.c.o
[1609/2552] Compiling C object libcommon.fa.p/hw_mem_memory-device.c.o
[1610/2552] Compiling C object libcommon.fa.p/hw_net_ne2000.c.o
[1611/2552] Compiling C object libcommon.fa.p/hw_net_can_ctucan_core.c.o
[1612/2552] Compiling C object libcommon.fa.p/hw_pci_pcie_host.c.o
[1613/2552] Compiling C object libcommon.fa.p/hw_pci_slotid_cap.c.o
[1614/2552] Compiling C object libcommon.fa.p/hw_net_rocker_rocker_desc.c.o
[1615/2552] Compiling C object libcommon.fa.p/hw_input_ps2.c.o
[1616/2552] Compiling C object libcommon.fa.p/hw_display_vmware_vga.c.o
[1617/2552] Compiling C object libcommon.fa.p/hw_net_vhost_net.c.o
[1618/2552] Compiling C object libcommon.fa.p/hw_nvme_subsys.c.o
[1619/2552] Compiling C object libcommon.fa.p/hw_pci-host_pam.c.o
[1620/2552] Compiling C object libcommon.fa.p/hw_scsi_emulation.c.o
[1621/2552] Compiling C object libcommon.fa.p/hw_net_can_ctucan_pci.c.o
[1622/2552] Compiling C object libcommon.fa.p/hw_net_pcnet-pci.c.o
[1623/2552] Compiling C object libcommon.fa.p/hw_pci_pci-hmp-cmds.c.o
[1624/2552] Compiling C object libcommon.fa.p/hw_net_can_can_sja1000.c.o
[1625/2552] Compiling C object libblockdev.fa.p/nbd_server.c.o
[1626/2552] Compiling C object libcommon.fa.p/hw_ide_atapi.c.o
[1627/2552] Compiling C object libcommon.fa.p/hw_net_net_tx_pkt.c.o
[1628/2552] Compiling C object libcommon.fa.p/hw_pci_pcie_port.c.o
[1629/2552] Compiling C object libcommon.fa.p/hw_sd_sdmmc-internal.c.o
[1630/2552] Compiling C object libcommon.fa.p/hw_pci_pci-qmp-cmds.c.o
[1631/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_ioh3420.c.o
[1632/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_i82801b11.c.o
[1633/2552] Compiling C object libcommon.fa.p/hw_net_e1000x_common.c.o
[1634/2552] Linking static target libblockdev.fa
[1635/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_gen_pcie_root_port.c.o
[1636/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_pcie_pci_bridge.c.o
[1637/2552] Compiling C object libcommon.fa.p/hw_misc_ivshmem.c.o
[1638/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_pci_bridge_dev.c.o
[1639/2552] Compiling C object libcommon.fa.p/hw_pci_pcie_sriov.c.o
[1640/2552] Compiling C object libcommon.fa.p/hw_pci_msi.c.o
[1641/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_pcie_root_port.c.o
[1642/2552] Compiling C object libcommon.fa.p/hw_pci_pci_bridge.c.o
[1643/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_xio3130_downstream.c.o
[1644/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_xio3130_upstream.c.o
[1645/2552] Compiling C object libcommon.fa.p/hw_pci-host_remote.c.o
[1646/2552] Compiling C object libcommon.fa.p/hw_smbios_smbios_type_38.c.o
[1647/2552] Compiling C object libcommon.fa.p/hw_pci-host_gpex.c.o
[1648/2552] Compiling C object libcommon.fa.p/hw_net_e1000e.c.o
[1649/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_cxl_root_port.c.o
[1650/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_cxl_downstream.c.o
[1651/2552] Compiling C object libcommon.fa.p/hw_usb_desc-msos.c.o
[1652/2552] Compiling C object libcommon.fa.p/hw_timer_i8254_common.c.o
[1653/2552] Compiling C object libcommon.fa.p/hw_tpm_tpm_tis_isa.c.o
[1654/2552] Compiling C object libcommon.fa.p/hw_usb_pcap.c.o
[1655/2552] Compiling C object libcommon.fa.p/hw_nvme_ns.c.o
[1656/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_cxl_upstream.c.o
[1657/2552] Compiling C object libcommon.fa.p/hw_sd_sdhci-pci.c.o
[1658/2552] Compiling C object libcommon.fa.p/hw_sd_core.c.o
[1659/2552] Compiling C object libcommon.fa.p/hw_usb_combined-packet.c.o
[1660/2552] Compiling C object libcommon.fa.p/hw_timer_i8254.c.o
[1661/2552] Compiling C object libcommon.fa.p/hw_tpm_tpm_crb.c.o
[1662/2552] Compiling C object libcommon.fa.p/hw_usb_imx-usb-phy.c.o
[1663/2552] Compiling C object libcommon.fa.p/hw_usb_libhw.c.o
[1664/2552] Compiling C object libcommon.fa.p/hw_pci_msix.c.o
[1665/2552] Compiling C object libcommon.fa.p/hw_scsi_mptendian.c.o
[1666/2552] Compiling C object libcommon.fa.p/hw_net_eepro100.c.o
[1667/2552] Compiling C object libcommon.fa.p/hw_pci_shpc.c.o
[1668/2552] Compiling C object libcommon.fa.p/hw_net_net_rx_pkt.c.o
[1669/2552] Compiling C object libcommon.fa.p/hw_pci-bridge_pci_expander_bridge.c.o
[1670/2552] Compiling C object libcommon.fa.p/hw_acpi_aml-build.c.o
[1671/2552] Compiling C object libcommon.fa.p/hw_pci-host_gpex-acpi.c.o
[1672/2552] Compiling C object libcommon.fa.p/hw_usb_hcd-xhci-nec.c.o
[1673/2552] Compiling C object libblock.fa.p/block_qcow2.c.o
[1674/2552] Compiling C object libcommon.fa.p/hw_pci-host_i440fx.c.o
[1675/2552] Compiling C object libcommon.fa.p/hw_usb_dev-storage-bot.c.o
[1676/2552] Compiling C object libcommon.fa.p/hw_usb_dev-hid.c.o
[1677/2552] Compiling C object libcommon.fa.p/hw_net_tulip.c.o
[1678/2552] Compiling C object libcommon.fa.p/hw_watchdog_watchdog.c.o
[1679/2552] Compiling C object libcommon.fa.p/hw_usb_dev-wacom.c.o
[1680/2552] Compiling C object libcommon.fa.p/hw_usb_hcd-ehci-pci.c.o
[1681/2552] Compiling C object libcommon.fa.p/hw_core_loader.c.o
[1682/2552] Compiling C object libcommon.fa.p/hw_nvme_dif.c.o
[1683/2552] Compiling C object libcommon.fa.p/hw_net_rocker_rocker_of_dpa.c.o
[1684/2552] Compiling C object libcommon.fa.p/gdbstub_softmmu.c.o
[1685/2552] Compiling C object libcommon.fa.p/hw_usb_hcd-xhci-sysbus.c.o
[1686/2552] Compiling C object libcommon.fa.p/hw_net_pcnet.c.o
[1687/2552] Compiling C object libcommon.fa.p/hw_pci-host_q35.c.o
[1688/2552] Compiling C object libcommon.fa.p/hw_ipmi_ipmi_bmc_sim.c.o
[1689/2552] Compiling C object libcommon.fa.p/hw_watchdog_wdt_ib700.c.o
[1690/2552] Compiling C object libcommon.fa.p/hw_pci_pcie.c.o
[1691/2552] Compiling C object libcommon.fa.p/fsdev_qemu-fsdev-opts.c.o
[1692/2552] Compiling C object libcommon.fa.p/hw_usb_u2f.c.o
[1693/2552] Compiling C object libcommon.fa.p/hw_usb_hcd-ohci-pci.c.o
[1694/2552] Compiling C object libcommon.fa.p/hw_usb_dev-storage-classic.c.o
[1695/2552] Compiling C object libcommon.fa.p/hw_usb_hcd-xhci-pci.c.o
[1696/2552] Compiling C object libcommon.fa.p/hw_usb_dev-audio.c.o
[1697/2552] Compiling C object libcommon.fa.p/hw_scsi_esp-pci.c.o
[1698/2552] Compiling C object libcommon.fa.p/hw_timer_hpet.c.o
[1699/2552] Compiling C object libcommon.fa.p/hw_usb_u2f-passthru.c.o
[1700/2552] Compiling C object libcommon.fa.p/audio_wavcapture.c.o
[1701/2552] Generating qemu.syms with a custom command (wrapped by meson to capture output)
[1702/2552] Compiling C object libcommon.fa.p/hw_virtio_virtio-bus.c.o
[1703/2552] Compiling C object libcommon.fa.p/chardev_testdev.c.o
[1704/2552] Compiling C object libblock.fa.p/block_vvfat.c.o
[1705/2552] Compiling C object libcommon.fa.p/hw_usb_core.c.o
[1706/2552] Compiling C object libcommon.fa.p/hw_ide_ahci.c.o
[1707/2552] Compiling C object libcommon.fa.p/hw_nvram_fw_cfg.c.o
[1708/2552] Compiling C object libcommon.fa.p/hw_scsi_mptconfig.c.o
[1709/2552] Compiling C object libcommon.fa.p/hw_tpm_tpm_tis_common.c.o
[1710/2552] Compiling C object libcommon.fa.p/hw_usb_dev-hub.c.o
[1711/2552] Compiling C object libcommon.fa.p/ui_vnc-enc-zrle.c.o
[1712/2552] Compiling C object libcommon.fa.p/hw_net_rocker_rocker.c.o
[1713/2552] Compiling C object libcommon.fa.p/hw_remote_iommu.c.o
[1714/2552] Generating block.syms with a custom command (wrapped by meson to capture output)
[1715/2552] Compiling C object libcommon.fa.p/hw_usb_dev-network.c.o
[1716/2552] Compiling C object libcommon.fa.p/fsdev_qemu-fsdev-dummy.c.o
[1717/2552] Compiling C object libcommon.fa.p/hw_scsi_scsi-generic.c.o
[1718/2552] Compiling C object libcommon.fa.p/softmmu_runstate-action.c.o
[1719/2552] Compiling C object libcommon.fa.p/chardev_msmouse.c.o
[1720/2552] Compiling C object libcommon.fa.p/softmmu_balloon.c.o
[1721/2552] Compiling C object libcommon.fa.p/hw_usb_dev-storage.c.o
[1722/2552] Compiling C object libcommon.fa.p/hw_watchdog_wdt_i6300esb.c.o
[1723/2552] Compiling C object libcommon.fa.p/hw_usb_desc.c.o
[1724/2552] Compiling C object libcommon.fa.p/fsdev_qemu-fsdev-throttle.c.o
[1725/2552] Compiling C object libcommon.fa.p/block_block-ram-registrar.c.o
[1726/2552] Compiling C object libcommon.fa.p/softmmu_datadir.c.o
[1727/2552] Compiling C object libcommon.fa.p/hw_remote_message.c.o
[1728/2552] Compiling C object libcommon.fa.p/softmmu_cpu-throttle.c.o
[1729/2552] Compiling C object libcommon.fa.p/hw_net_rtl8139.c.o
[1730/2552] Compiling C object libcommon.fa.p/hw_remote_remote-obj.c.o
[1731/2552] Compiling C object libcommon.fa.p/hw_remote_machine.c.o
[1732/2552] Compiling C object libcommon.fa.p/dump_dump-hmp-cmds.c.o
[1733/2552] Compiling C object libcommon.fa.p/hw_usb_dev-serial.c.o
[1734/2552] Compiling C object libcommon.fa.p/audio_noaudio.c.o
[1735/2552] Compiling C object libcommon.fa.p/hw_remote_iohub.c.o
[1736/2552] Compiling C object libcommon.fa.p/hw_remote_mpqemu-link.c.o
[1737/2552] Compiling C object libcommon.fa.p/chardev_wctablet.c.o
[1738/2552] Compiling C object libcommon.fa.p/hw_usb_bus.c.o
[1739/2552] Compiling C object libcommon.fa.p/backends_confidential-guest-support.c.o
[1740/2552] Compiling C object libcommon.fa.p/block_blkreplay.c.o
[1741/2552] Compiling C object libcommon.fa.p/softmmu_globals.c.o
[1742/2552] Compiling C object libcommon.fa.p/hw_net_e1000.c.o
[1743/2552] Compiling C object libcommon.fa.p/softmmu_tpm.c.o
[1744/2552] Compiling C object libcommon.fa.p/softmmu_cpu-timers.c.o
[1745/2552] Compiling C object libcommon.fa.p/hw_usb_dev-smartcard-reader.c.o
[1746/2552] Compiling C object libcommon.fa.p/softmmu_rtc.c.o
[1747/2552] Compiling C object libcommon.fa.p/backends_rng-builtin.c.o
[1748/2552] Compiling C object libcommon.fa.p/ui_vnc.c.o
[1749/2552] Compiling C object libcommon.fa.p/audio_wavaudio.c.o
[1750/2552] Compiling C object libcommon.fa.p/hw_remote_proxy.c.o
[1751/2552] Compiling C object libcommon.fa.p/backends_hostmem-ram.c.o
[1752/2552] Compiling C object libcommon.fa.p/backends_rng-random.c.o
[1753/2552] Compiling C object libcommon.fa.p/hw_net_vmxnet3.c.o
[1754/2552] Compiling C object libcommon.fa.p/backends_rng-egd.c.o
[1755/2552] Compiling C object libcommon.fa.p/block_qapi-sysemu.c.o
[1756/2552] Compiling C object libcommon.fa.p/softmmu_memory_mapping.c.o
[1757/2552] Compiling C object libcommon.fa.p/backends_rng.c.o
[1758/2552] Compiling C object libcommon.fa.p/backends_cryptodev-builtin.c.o
[1759/2552] Compiling C object libcommon.fa.p/hw_virtio_virtio-mmio.c.o
[1760/2552] Compiling C object libcommon.fa.p/hw_scsi_mptsas.c.o
[1761/2552] Compiling C object libcommon.fa.p/hw_ide_core.c.o
[1762/2552] Compiling C object libcommon.fa.p/hw_usb_dev-uas.c.o
[1763/2552] Compiling C object libcommon.fa.p/backends_hostmem-memfd.c.o
[1764/2552] Compiling C object libcommon.fa.p/hw_sd_sd.c.o
[1765/2552] Compiling C object libcommon.fa.p/audio_audio_legacy.c.o
[1766/2552] Compiling C object libcommon.fa.p/softmmu_bootdevice.c.o
[1767/2552] Compiling C object libcommon.fa.p/migration_channel.c.o
[1768/2552] Compiling C object libcommon.fa.p/hw_scsi_esp.c.o
[1769/2552] Compiling C object libcommon.fa.p/backends_cryptodev.c.o
[1770/2552] Compiling C object libcommon.fa.p/backends_hostmem-file.c.o
[1771/2552] Compiling C object libcommon.fa.p/migration_exec.c.o
[1772/2552] Compiling C object libcommon.fa.p/net_checksum.c.o
[1773/2552] Compiling C object libcommon.fa.p/migration_colo-failover.c.o
[1774/2552] Compiling C object libcommon.fa.p/net_util.c.o
[1775/2552] Compiling C object libcommon.fa.p/hw_scsi_vmw_pvscsi.c.o
[1776/2552] Compiling C object libcommon.fa.p/backends_vhost-user.c.o
[1777/2552] Compiling C object libcommon.fa.p/softmmu_dma-helpers.c.o
[1778/2552] Compiling C object libcommon.fa.p/net_filter-buffer.c.o
[1779/2552] Compiling C object libcommon.fa.p/migration_global_state.c.o
[1780/2552] Compiling C object libcommon.fa.p/migration_fd.c.o
[1781/2552] Compiling C object libcommon.fa.p/migration_multifd-zlib.c.o
[1782/2552] Compiling C object libblock.fa.p/block.c.o
[1783/2552] Compiling C object libcommon.fa.p/hw_scsi_scsi-bus.c.o
[1784/2552] Compiling C object libcommon.fa.p/net_announce.c.o
[1785/2552] Compiling C object libcommon.fa.p/net_dump.c.o
[1786/2552] Compiling C object libcommon.fa.p/net_colo.c.o
[1787/2552] Compiling C object libcommon.fa.p/migration_socket.c.o
[1788/2552] Compiling C object libcommon.fa.p/hw_usb_hcd-uhci.c.o
[1789/2552] Compiling C object libcommon.fa.p/audio_mixeng.c.o
[1790/2552] Compiling C object libcommon.fa.p/backends_hostmem.c.o
[1791/2552] Compiling C object libcommon.fa.p/ebpf_ebpf_rss-stub.c.o
[1792/2552] Compiling C object libcommon.fa.p/net_filter-replay.c.o
[1793/2552] Compiling C object libcommon.fa.p/migration_tls.c.o
[1794/2552] Compiling C object libcommon.fa.p/replay_replay-time.c.o
[1795/2552] Compiling C object libcommon.fa.p/replay_replay-random.c.o
[1796/2552] Compiling C object libcommon.fa.p/block_monitor_block-hmp-cmds.c.o
[1797/2552] Compiling C object libcommon.fa.p/net_queue.c.o
[1798/2552] Compiling C object libcommon.fa.p/net_filter-rewriter.c.o
[1799/2552] Compiling C object libcommon.fa.p/net_tap-linux.c.o
[1800/2552] Compiling C object libcommon.fa.p/replay_replay-input.c.o
[1801/2552] Compiling C object libcommon.fa.p/softmmu_runstate.c.o
[1802/2552] Compiling C object libcommon.fa.p/net_filter-mirror.c.o
[1803/2552] Compiling C object libcommon.fa.p/hw_smbios_smbios.c.o
[1804/2552] Compiling C object libcommon.fa.p/net_eth.c.o
[1805/2552] Compiling C object libcommon.fa.p/softmmu_cpus.c.o
[1806/2552] Compiling C object libcommon.fa.p/net_can_can_core.c.o
[1807/2552] Compiling C object libcommon.fa.p/net_can_can_host.c.o
[1808/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-emit-events.c.o
[1809/2552] Compiling C object libcommon.fa.p/softmmu_device_tree.c.o
[1810/2552] Compiling C object libcommon.fa.p/replay_replay-char.c.o
[1811/2552] Compiling C object qga/qemu-ga.p/meson-generated_.._qga-qapi-events.c.o
[1812/2552] Compiling C object libcommon.fa.p/replay_replay-audio.c.o
[1813/2552] Compiling C object libcommon.fa.p/replay_replay-net.c.o
[1814/2552] Compiling C object libcommon.fa.p/net_can_can_socketcan.c.o
[1815/2552] Compiling C object libcommon.fa.p/net_hub.c.o
[1816/2552] Compiling C object libcommon.fa.p/hw_pci_pci.c.o
[1817/2552] Linking static target libblock.fa
[1818/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-events-misc-target.c.o
[1819/2552] Compiling C object libcommon.fa.p/replay_replay.c.o
[1820/2552] Compiling C object libcommon.fa.p/net_stream.c.o
[1821/2552] Compiling C object libcommon.fa.p/replay_replay-snapshot.c.o
[1822/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._x86_64-softmmu-gdbstub-xml.c.o
[1823/2552] Compiling C object libcommon.fa.p/replay_replay-events.c.o
[1824/2552] Compiling C object libcommon.fa.p/net_l2tpv3.c.o
[1825/2552] Compiling C object libcommon.fa.p/net_vhost-user.c.o
[1826/2552] Compiling C object libcommon.fa.p/softmmu_qdev-monitor.c.o
[1827/2552] Compiling C object libcommon.fa.p/migration_vmstate.c.o
[1828/2552] Compiling C object libcommon.fa.p/replay_replay-internal.c.o
[1829/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-events-machine-target.c.o
[1830/2552] Compiling C object libcommon.fa.p/migration_qemu-file.c.o
[1831/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-types-machine-target.c.o
[1832/2552] Compiling C object libcommon.fa.p/net_filter.c.o
[1833/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-types-misc-target.c.o
[1834/2552] Compiling C object libcommon.fa.p/hw_sd_sdhci.c.o
[1835/2552] Compiling C object libcommon.fa.p/net_dgram.c.o
[1836/2552] Compiling C object libcommon.fa.p/replay_replay-debugging.c.o
[1837/2552] Compiling C object libcommon.fa.p/accel_accel-softmmu.c.o
[1838/2552] Compiling C object libcommon.fa.p/migration_colo.c.o
[1839/2552] Compiling C object libcommon.fa.p/hw_display_vhost-user-gpu-pci.c.o
[1840/2552] Compiling C object libcommon.fa.p/monitor_qmp-cmds.c.o
[1841/2552] Compiling C object libcommon.fa.p/hw_display_virtio-gpu-base.c.o
[1842/2552] Compiling C object libcommon.fa.p/hw_display_virtio-gpu-udmabuf.c.o
[1843/2552] Compiling C object libcommon.fa.p/hw_display_virtio-gpu-pci.c.o
[1844/2552] Compiling C object qga/qemu-ga.p/meson-generated_.._qga-qapi-init-commands.c.o
[1845/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-commands-machine-target.c.o
[1846/2552] Compiling C object libcommon.fa.p/net_socket.c.o
[1847/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-visit-machine-target.c.o
[1848/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-events.c.o
[1849/2552] Compiling C object libcommon.fa.p/hw_display_vhost-user-vga.c.o
[1850/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-visit-misc-target.c.o
[1851/2552] Compiling C object libcommon.fa.p/migration_block-dirty-bitmap.c.o
[1852/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_arch_memory_mapping.c.o
[1853/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-commands.c.o
[1854/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-types.c.o
[1855/2552] Compiling C object libcommon.fa.p/hw_virtio_virtio-pci.c.o
[1856/2552] Compiling C object libcommon.fa.p/net_vhost-vdpa.c.o
[1857/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_arch_dump.c.o
[1858/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-visit.c.o
[1859/2552] Compiling C object libcommon.fa.p/hw_display_virtio-vga.c.o
[1860/2552] Compiling C object libcommon.fa.p/hw_usb_hcd-ohci.c.o
[1861/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_sysemu_tcg-cpu.c.o
[1862/2552] Compiling C object libcommon.fa.p/hw_display_vhost-user-gpu.c.o
[1863/2552] Compiling C object libcommon.fa.p/migration_block.c.o
[1864/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-commands-misc-target.c.o
[1865/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_e820_memory_layout.c.o
[1866/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_cpu-sysemu.c.o
[1867/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_kvm_hyperv.c.o
[1868/2552] Compiling C object libcommon.fa.p/hw_scsi_scsi-disk.c.o
[1869/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-init-commands.c.o
[1870/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-qmp.c.o
[1871/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_sysemu_fpu_helper.c.o
[1872/2552] Compiling C object libcommon.fa.p/audio_ossaudio.c.o
[1873/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_sysemu_smm_helper.c.o
[1874/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_kvm_kvm-cpu.c.o
[1875/2552] Compiling C object libcommon.fa.p/net_tap.c.o
[1876/2552] Compiling C object libcommon.fa.p/hw_usb_dev-mtp.c.o
[1877/2552] Compiling C object libcommon.fa.p/monitor_hmp.c.o
[1878/2552] Compiling C object libcommon.fa.p/net_colo-compare.c.o
[1879/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_sysemu_bpt_helper.c.o
[1880/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_sysemu_seg_helper.c.o
[1881/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_sysemu_misc_helper.c.o
[1882/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_sysemu_excp_helper.c.o
[1883/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_fw_cfg.c.o
[1884/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_kvm_i8259.c.o
[1885/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_kvm_clock.c.o
[1886/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_bpt_helper.c.o
[1887/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_block_virtio-blk-common.c.o
[1888/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_vmmouse.c.o
[1889/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_generic_event_device_x86.c.o
[1890/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_host-cpu.c.o
[1891/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_multiboot.c.o
[1892/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_machine.c.o
[1893/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_tcg-cpu.c.o
[1894/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_excp_helper.c.o
[1895/2552] Compiling C object libcommon.fa.p/net_net.c.o
[1896/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_monitor.c.o
[1897/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_xsave_helper.c.o
[1898/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_pc_sysfw_ovmf.c.o
[1899/2552] Compiling C object libcommon.fa.p/migration_postcopy-ram.c.o
[1900/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_x86-iommu.c.o
[1901/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_mpx_helper.c.o
[1902/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_misc_helper.c.o
[1903/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_acpi-common.c.o
[1904/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_gdbstub.c.o
[1905/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/meson-generated_.._qapi_qapi-introspect.c.o
[1906/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_kvm_i8254.c.o
[1907/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_mem_helper.c.o
[1908/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_vmport.c.o
[1909/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_sgx-epc.c.o
[1910/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_acpi-microvm.c.o
[1911/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_port92.c.o
[1912/2552] Compiling C object libcommon.fa.p/hw_scsi_lsi53c895a.c.o
[1913/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_sysemu_svm_helper.c.o
[1914/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/trace_control-target.c.o
[1915/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_microvm-dt.c.o
[1916/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_kvm_ioapic.c.o
[1917/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_tpm_tpm_ppi.c.o
[1918/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_hyperv_hyperv_testdev.c.o
[1919/2552] Compiling C object qga/qemu-ga.p/meson-generated_.._qga-qapi-commands.c.o
[1920/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_kvm_apic.c.o
[1921/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_kvmvapic.c.o
[1922/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_cpu-dump.c.o
[1923/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-iova-tree.c.o
[1924/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_sgx.c.o
[1925/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_pc_sysfw.c.o
[1926/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_int_helper.c.o
[1927/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_core_machine-qmp-cmds.c.o
[1928/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-vsock.c.o
[1929/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-config-io.c.o
[1930/2552] Compiling C object libcommon.fa.p/audio_audio.c.o
[1931/2552] Compiling C object libcommon.fa.p/hw_usb_hcd-ehci.c.o
[1932/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_block_dataplane_virtio-blk.c.o
[1933/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_scsi_vhost-user-scsi.c.o
[1934/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_scsi_vhost-scsi-common.c.o
[1935/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_microvm.c.o
[1936/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_hyperv_syndbg.c.o
[1937/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_block_vhost-user-blk.c.o
[1938/2552] Compiling C object libcommon.fa.p/monitor_hmp-cmds.c.o
[1939/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_helper.c.o
[1940/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_scsi_virtio-scsi-dataplane.c.o
[1941/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_vfio_spapr.c.o
[1942/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_cc_helper.c.o
[1943/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-vsock-common.c.o
[1944/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_scsi_vhost-scsi.c.o
[1945/2552] Compiling C object libcommon.fa.p/hw_display_virtio-gpu.c.o
[1946/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_hyperv_hyperv.c.o
[1947/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_sev.c.o
[1948/2552] Compiling C object libcommon.fa.p/hw_scsi_megasas.c.o
[1949/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_tcg-accel-ops-icount.c.o
[1950/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-backend.c.o
[1951/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_core_numa.c.o
[1952/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-vsock.c.o
[1953/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-i2c.c.o
[1954/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-shadow-virtqueue.c.o
[1955/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-fs.c.o
[1956/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-pmem.c.o
[1957/2552] Compiling C object qga/qemu-ga.p/commands.c.o
[1958/2552] Compiling C object qga/qemu-ga.p/meson-generated_.._qga-qapi-visit.c.o
[1959/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_intc_apic_common.c.o
[1960/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_tcg-accel-ops-mttcg.c.o
[1961/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-rng.c.o
[1962/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_rtc_mc146818rtc.c.o
[1963/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_vfio_igd.c.o
[1964/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_intc_ioapic.c.o
[1965/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-rng.c.o
[1966/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-vsock-pci.c.o
[1967/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/softmmu_arch_init.c.o
[1968/2552] Compiling C object libcommon.fa.p/hw_net_e1000e_core.c.o
[1969/2552] Compiling C object libcommon.fa.p/hw_usb_hcd-xhci.c.o
[1970/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-gpio-pci.c.o
[1971/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vdpa-dev.c.o
[1972/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_x86.c.o
[1973/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-vsock-pci.c.o
[1974/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_isa_lpc_ich9.c.o
[1975/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-gpio.c.o
[1976/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost.c.o
[1977/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-input-pci.c.o
[1978/2552] Compiling C object libcommon.fa.p/softmmu_vl.c.o
[1979/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_vfio_display.c.o
[1980/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-input-host-pci.c.o
[1981/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_char_virtio-serial-bus.c.o
[1982/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/migration_target.c.o
[1983/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-crypto-pci.c.o
[1984/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-i2c-pci.c.o
[1985/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-crypto.c.o
[1986/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-blk-pci.c.o
[1987/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-rng-pci.c.o
[1988/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-scsi-pci.c.o
[1989/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-fs-pci.c.o
[1990/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-input-pci.c.o
[1991/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/page-vary.c.o
[1992/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_intc_apic.c.o
[1993/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-balloon-pci.c.o
[1994/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-rng-pci.c.o
[1995/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_pc_q35.c.o
[1996/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-serial-pci.c.o
[1997/2552] Compiling C object libcommon.fa.p/migration_savevm.c.o
[1998/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_block_virtio-blk.c.o
[1999/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-blk-pci.c.o
[2000/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user-scsi-pci.c.o
[2001/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-net-pci.c.o
[2002/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-scsi-pci.c.o
[2003/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_vfio_migration.c.o
[2004/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-pmem-pci.c.o
[2005/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vdpa-dev-pci.c.o
[2006/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/tcg_tcg-common.c.o
[2007/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_pc.c.o
[2008/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_remote_memory.c.o
[2009/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-mem-pci.c.o
[2010/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-iommu-pci.c.o
[2011/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_amd_iommu.c.o
[2012/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_remote_proxy-memory-listener.c.o
[2013/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_stubs_hax-stub.c.o
[2014/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-balloon.c.o
[2015/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_scsi_virtio-scsi.c.o
[2016/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_stubs_xen-stub.c.o
[2017/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_accel-common.c.o
[2018/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/softmmu_ioport.c.o
[2019/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/softmmu_icount.c.o
[2020/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_cpu-exec-common.c.o
[2021/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_pc_piix.c.o
[2022/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/cpu.c.o
[2023/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/disas.c.o
[2024/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_dummy-cpus.c.o
[2025/2552] Compiling C object qga/qemu-ga.p/meson-generated_.._qga-qapi-emit-events.c.o
[2026/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_qtest_qtest.c.o
[2027/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_kvm_kvm-accel-ops.c.o
[2028/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/tcg_region.c.o
[2029/2552] Compiling C object qga/qemu-ga.p/cutils.c.o
[2030/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/dump_win_dump.c.o
[2031/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_tcg-runtime.c.o
[2032/2552] Compiling C object qga/qemu-ga.p/meson-generated_.._qga-qapi-introspect.c.o
[2033/2552] Compiling C object qemu-system-x86_64.p/softmmu_main.c.o
[2034/2552] Compiling C object qga/qemu-ga.p/channel-posix.c.o
[2035/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_hmp.c.o
[2036/2552] Compiling C object storage-daemon/qemu-storage-daemon.p/meson-generated_.._qapi_qapi-emit-events.c.o
[2037/2552] Compiling C object qga/qemu-ga.p/guest-agent-command-state.c.o
[2038/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/softmmu_dirtylimit.c.o
[2039/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_tcg-all.c.o
[2040/2552] Compiling C object qga/qemu-ga.p/commands-posix-ssh.c.o
[2041/2552] Compiling C object qemu-edid.p/qemu-edid.c.o
[2042/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-vdpa.c.o
[2043/2552] Compiling C object qga/qemu-ga.p/meson-generated_.._qga-qapi-types.c.o
[2044/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_vfio_pci-quirks.c.o
[2045/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-commands-sub-sub-module.c.o
[2046/2552] Compiling C object storage-daemon/qemu-storage-daemon.p/meson-generated_.._qapi_qapi-types.c.o
[2047/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-mem.c.o
[2048/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_tcg-accel-ops.c.o
[2049/2552] Compiling C object storage-daemon/qemu-storage-daemon.p/meson-generated_.._qapi_qapi-commands.c.o
[2050/2552] Compiling C object storage-daemon/qemu-storage-daemon.p/meson-generated_.._qapi_qapi-visit.c.o
[2051/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._include_test-qapi-visit-sub-module.c.o
[2052/2552] Compiling C object tests/bench/benchmark-crypto-hmac.p/benchmark-crypto-hmac.c.o
[2053/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_tcg-accel-ops-rr.c.o
[2054/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-emit-events.c.o
[2055/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_translate-all.c.o
[2056/2552] Compiling C object storage-daemon/qemu-storage-daemon.p/meson-generated_.._qapi_qapi-events.c.o
[2057/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/plugins_api.c.o
[2058/2552] Compiling C object contrib/ivshmem-server/ivshmem-server.p/main.c.o
[2059/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_hyperv_vmbus.c.o
[2060/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-introspect.c.o
[2061/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_acpi-build.c.o
[2062/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-types-sub-sub-module.c.o
[2063/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-events-sub-sub-module.c.o
[2064/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._include_test-qapi-events-sub-module.c.o
[2065/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._include_test-qapi-commands-sub-module.c.o
[2066/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._include_test-qapi-types-sub-module.c.o
[2067/2552] Compiling C object contrib/ivshmem-client/ivshmem-client.p/main.c.o
[2068/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio-iommu.c.o
[2069/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._qapi-builtin-types.c.o
[2070/2552] Compiling C object qga/qemu-ga.p/commands-linux.c.o
[2071/2552] Compiling C object qemu-bridge-helper.p/qemu-bridge-helper.c.o
[2072/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_tb-maint.c.o
[2073/2552] Compiling C object qemu-pr-helper.p/scsi_utils.c.o
[2074/2552] Compiling C object qemu-edid.p/hw_display_edid-generate.c.o
[2075/2552] Compiling C object tests/bench/benchmark-crypto-akcipher.p/benchmark-crypto-akcipher.c.o
[2076/2552] Compiling C object storage-daemon/qemu-storage-daemon.p/meson-generated_.._qapi_qapi-init-commands.c.o
[2077/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-visit-sub-sub-module.c.o
[2078/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/plugins_loader.c.o
[2079/2552] Compiling C object tests/bench/benchmark-crypto-hash.p/benchmark-crypto-hash.c.o
[2080/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/softmmu_qtest.c.o
[2081/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/migration_dirtyrate.c.o
[2082/2552] Compiling C object contrib/ivshmem-client/ivshmem-client.p/ivshmem-client.c.o
[2083/2552] Compiling C object contrib/vhost-user-blk/vhost-user-blk.p/vhost-user-blk.c.o
[2084/2552] Compiling C object contrib/vhost-user-input/vhost-user-input.p/main.c.o
[2085/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-init-commands.c.o
[2086/2552] Compiling C object tests/unit/check-qnull.p/check-qnull.c.o
[2087/2552] Compiling C object contrib/ivshmem-server/ivshmem-server.p/ivshmem-server.c.o
[2088/2552] Compiling C object qemu-io.p/qemu-io.c.o
[2089/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-events.c.o
[2090/2552] Compiling C object tests/unit/check-qlist.p/check-qlist.c.o
[2091/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_display_vga.c.o
[2092/2552] Compiling C object storage-daemon/qemu-storage-daemon.p/qemu-storage-daemon.c.o
[2093/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._qapi-builtin-visit.c.o
[2094/2552] Compiling C object storage-daemon/qemu-storage-daemon.p/meson-generated_.._qapi_qapi-introspect.c.o
[2095/2552] Compiling C object tests/fp/fp-test-log2.p/fp-test-log2.c.o
[2096/2552] Compiling C object tests/unit/check-qom-interface.p/check-qom-interface.c.o
[2097/2552] Compiling C object tests/bench/benchmark-crypto-cipher.p/benchmark-crypto-cipher.c.o
[2098/2552] Linking target contrib/ivshmem-client/ivshmem-client
[2099/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_cpu.c.o
[2100/2552] Compiling C object tests/unit/check-qstring.p/check-qstring.c.o
[2101/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-types.c.o
[2102/2552] Compiling C object qemu-pr-helper.p/scsi_qemu-pr-helper.c.o
[2103/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_cpu-exec.c.o
[2104/2552] Compiling C object tests/bench/qht-bench.p/qht-bench.c.o
[2105/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-commands.c.o
[2106/2552] Compiling C object tests/unit/test-uuid.p/test-uuid.c.o
[2107/2552] Compiling C object tests/unit/check-qnum.p/check-qnum.c.o
[2108/2552] Compiling C object tests/qtest/usb-hcd-uhci-test.p/usb-hcd-uhci-test.c.o
[2109/2552] Compiling C object tests/unit/test-bitmap.p/test-bitmap.c.o
[2110/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_translator.c.o
[2111/2552] Compiling C object tests/vhost-user-bridge.p/vhost-user-bridge.c.o
[2112/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_vhost-user.c.o
[2113/2552] Compiling C object tests/unit/check-qlit.p/check-qlit.c.o
[2114/2552] Linking target qemu-bridge-helper
[2115/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/plugins_core.c.o
[2116/2552] Compiling C object qga/qemu-ga.p/main.c.o
[2117/2552] Compiling C object tests/unit/check-qdict.p/check-qdict.c.o
[2118/2552] Linking target qemu-edid
[2119/2552] Compiling C object tests/unit/test-clone-visitor.p/test-clone-visitor.c.o
[2120/2552] Linking target tests/bench/benchmark-crypto-hmac
[2121/2552] Compiling C object tests/unit/test-shift128.p/test-shift128.c.o
[2122/2552] Compiling C object tests/unit/test-div128.p/test-div128.c.o
[2123/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_vfio_common.c.o
[2124/2552] Compiling C object tests/unit/test-mul64.p/test-mul64.c.o
[2125/2552] Compiling C object tests/unit/check-qobject.p/check-qobject.c.o
[2126/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/tcg_tcg-op-vec.c.o
[2127/2552] Compiling C object tests/unit/test-forward-visitor.p/test-forward-visitor.c.o
[2128/2552] Compiling C object qemu-nbd.p/qemu-nbd.c.o
[2129/2552] Compiling C object tests/unit/test-x86-cpuid.p/test-x86-cpuid.c.o
[2130/2552] Linking target contrib/vhost-user-blk/vhost-user-blk
[2131/2552] Linking target tests/unit/check-qnull
[2132/2552] Linking target tests/bench/benchmark-crypto-hash
[2133/2552] Compiling C object tests/unit/test-bitcnt.p/test-bitcnt.c.o
[2134/2552] Linking target contrib/ivshmem-server/ivshmem-server
[2135/2552] Compiling C object tests/unit/test-string-output-visitor.p/test-string-output-visitor.c.o
[2136/2552] Linking target tests/bench/benchmark-crypto-akcipher
[2137/2552] Linking target tests/unit/check-qlist
[2138/2552] Linking target contrib/vhost-user-input/vhost-user-input
[2139/2552] Linking target tests/unit/check-qom-interface
[2140/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_plugin-gen.c.o
[2141/2552] Compiling C object tests/unit/test-qapi-util.p/test-qapi-util.c.o
[2142/2552] Compiling C object tests/unit/test-rcu-list.p/test-rcu-list.c.o
[2143/2552] Linking target tests/unit/check-qstring
[2144/2552] Compiling C object tests/unit/test-opts-visitor.p/test-opts-visitor.c.o
[2145/2552] Compiling C object tests/unit/test-bitops.p/test-bitops.c.o
[2146/2552] Compiling C object tests/unit/test-rcu-tailq.p/test-rcu-tailq.c.o
[2147/2552] Linking target tests/bench/qht-bench
[2148/2552] Linking target tests/unit/check-qnum
[2149/2552] Compiling C object tests/unit/test-rcu-simpleq.p/test-rcu-simpleq.c.o
[2150/2552] Linking target tests/unit/test-bitmap
[2151/2552] Compiling C object tests/unit/test-int128.p/test-int128.c.o
[2152/2552] Compiling C object tests/unit/test-rcu-slist.p/test-rcu-slist.c.o
[2153/2552] Compiling C object tests/unit/ptimer-test.p/ptimer-test-stubs.c.o
[2154/2552] Compiling C object tests/unit/test-coroutine.p/iothread.c.o
[2155/2552] Compiling C object tests/unit/test-aio.p/iothread.c.o
[2156/2552] Compiling C object tests/unit/test-string-input-visitor.p/test-string-input-visitor.c.o
[2157/2552] Compiling C object tests/unit/test-aio-multithread.p/iothread.c.o
[2158/2552] Linking target tests/bench/benchmark-crypto-cipher
[2159/2552] Linking target tests/unit/test-uuid
[2160/2552] Compiling C object tests/unit/test-qht.p/test-qht.c.o
[2161/2552] Compiling C object tests/unit/test-logging.p/test-logging.c.o
[2162/2552] Compiling C object tests/unit/rcutorture.p/rcutorture.c.o
[2163/2552] Linking target tests/vhost-user-bridge
[2164/2552] Compiling C object tests/unit/test-throttle.p/iothread.c.o
[2165/2552] Compiling C object tests/unit/test-interval-tree.p/test-interval-tree.c.o
[2166/2552] Compiling C object tests/unit/test-thread-pool.p/iothread.c.o
[2167/2552] Compiling C object tests/unit/test-smp-parse.p/test-smp-parse.c.o
[2168/2552] Compiling C object tests/unit/test-qmp-event.p/test-qmp-event.c.o
[2169/2552] Linking target tests/unit/check-qlit
[2170/2552] Linking target tests/unit/check-qdict
[2171/2552] Linking target tests/unit/test-div128
[2172/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/dump_dump.c.o
[2173/2552] Compiling C object tests/unit/check-block-qdict.p/check-block-qdict.c.o
[2174/2552] Compiling C object tests/unit/test-qdist.p/test-qdist.c.o
[2175/2552] Linking target tests/unit/test-shift128
[2176/2552] Linking target tests/unit/test-mul64
[2177/2552] Compiling C object tests/unit/test-hbitmap.p/iothread.c.o
[2178/2552] Compiling C object tests/unit/test-qgraph.p/test-qgraph.c.o
[2179/2552] Compiling C object tests/unit/test-bdrv-drain.p/iothread.c.o
[2180/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_net_virtio-net.c.o
[2181/2552] Compiling C object tests/unit/test-bdrv-graph-mod.p/iothread.c.o
[2182/2552] Linking target qemu-pr-helper
[2183/2552] Compiling C object tests/unit/test-visitor-serialization.p/test-visitor-serialization.c.o
[2184/2552] Compiling C object tests/unit/test-aio-multithread.p/test-aio-multithread.c.o
[2185/2552] Linking target tests/unit/check-qobject
[2186/2552] Compiling C object tests/unit/test-coroutine.p/test-coroutine.c.o
[2187/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/monitor_misc.c.o
[2188/2552] Compiling C object tests/unit/ptimer-test.p/_lkp_benchmarks_qemu_hw_core_ptimer.c.o
[2189/2552] Compiling C object tests/unit/test-blockjob.p/iothread.c.o
[2190/2552] Compiling C object tests/unit/test-blockjob-txn.p/iothread.c.o
[2191/2552] Linking target tests/unit/test-x86-cpuid
[2192/2552] Linking target tests/unit/test-bitcnt
[2193/2552] Compiling C object tests/unit/test-smp-parse.p/_lkp_benchmarks_qemu_hw_core_machine-smp.c.o
[2194/2552] Compiling C object tests/unit/test-block-iothread.p/iothread.c.o
[2195/2552] Compiling C object tests/unit/test-qgraph.p/.._qtest_libqos_qgraph.c.o
[2196/2552] Compiling C object tests/qtest/tpm-tis-test.p/tpm-tis-util.c.o
[2197/2552] Compiling C object tests/unit/test-block-backend.p/iothread.c.o
[2198/2552] Compiling C object tests/unit/check-qom-proplist.p/check-qom-proplist.c.o
[2199/2552] Linking target tests/unit/test-qapi-util
[2200/2552] Compiling C object tests/unit/test-thread-pool.p/test-thread-pool.c.o
[2201/2552] Compiling C object tests/unit/test-bdrv-graph-mod.p/test-bdrv-graph-mod.c.o
[2202/2552] Linking target tests/unit/test-bitops
[2203/2552] Linking target tests/unit/test-rcu-list
[2204/2552] Compiling C object tests/unit/test-write-threshold.p/iothread.c.o
[2205/2552] Linking target tests/unit/test-rcu-tailq
[2206/2552] Compiling C object tests/unit/test-qobject-output-visitor.p/test-qobject-output-visitor.c.o
[2207/2552] Linking target tests/unit/test-rcu-simpleq
[2208/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/gdbstub_gdbstub.c.o
[2209/2552] Linking target tests/unit/test-int128
[2210/2552] Compiling C object tests/unit/test-authz-simple.p/test-authz-simple.c.o
[2211/2552] Linking target tests/unit/test-rcu-slist
[2212/2552] Linking target tests/unit/test-logging
[2213/2552] Linking target tests/unit/rcutorture
[2214/2552] Compiling C object tests/unit/test-crypto-hmac.p/test-crypto-hmac.c.o
[2215/2552] Linking target tests/unit/test-qht
[2216/2552] Compiling C object tests/unit/test-block-backend.p/test-block-backend.c.o
[2217/2552] Compiling C object tests/unit/test-write-threshold.p/test-write-threshold.c.o
[2218/2552] Linking target tests/unit/test-interval-tree
[2219/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_kvm_kvm.c.o
[2220/2552] Compiling C object tests/unit/test-crypto-akcipher.p/test-crypto-akcipher.c.o
[2221/2552] Compiling C object tests/unit/test-crypto-der.p/test-crypto-der.c.o
[2222/2552] Compiling C object tests/unit/test-crypto-hash.p/test-crypto-hash.c.o
[2223/2552] Linking target qemu-io
[2224/2552] Compiling C object qga/qemu-ga.p/commands-posix.c.o
[2225/2552] Compiling C object tests/unit/test-crypto-cipher.p/test-crypto-cipher.c.o
[2226/2552] Compiling C object tests/unit/check-qjson.p/check-qjson.c.o
[2227/2552] Linking target tests/unit/check-block-qdict
[2228/2552] Linking target tests/unit/test-qdist
[2229/2552] Compiling C object tests/unit/test-blockjob-txn.p/test-blockjob-txn.c.o
[2230/2552] Compiling C object tests/unit/test-authz-list.p/test-authz-list.c.o
[2231/2552] Compiling C object tests/unit/test-io-task.p/iothread.c.o
[2232/2552] Compiling C object tests/unit/test-io-task.p/test-io-task.c.o
[2233/2552] Compiling C object tests/unit/test-io-channel-socket.p/socket-helpers.c.o
[2234/2552] Compiling C object tests/unit/test-authz-listfile.p/test-authz-listfile.c.o
[2235/2552] Compiling C object tests/unit/test-crypto-secret.p/test-crypto-secret.c.o
[2236/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_i386_intel_iommu.c.o
[2237/2552] Compiling C object tests/unit/test-io-channel-command.p/test-io-channel-command.c.o
[2238/2552] Compiling C object tests/unit/test-io-channel-buffer.p/test-io-channel-buffer.c.o
[2239/2552] Compiling C object tests/unit/test-io-channel-socket.p/io-channel-helpers.c.o
[2240/2552] Compiling C object tests/unit/test-blockjob.p/test-blockjob.c.o
[2241/2552] Compiling C object tests/unit/ptimer-test.p/ptimer-test.c.o
[2242/2552] Compiling C object tests/unit/test-crypto-ivgen.p/test-crypto-ivgen.c.o
[2243/2552] Compiling C object tests/unit/test-io-channel-file.p/io-channel-helpers.c.o
[2244/2552] Compiling C object tests/unit/test-io-channel-file.p/test-io-channel-file.c.o
[2245/2552] Compiling C object tests/unit/test-io-channel-command.p/io-channel-helpers.c.o
[2246/2552] Compiling C object tests/unit/test-qobject-input-visitor.p/test-qobject-input-visitor.c.o
[2247/2552] Compiling C object tests/unit/test-io-channel-buffer.p/io-channel-helpers.c.o
[2248/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_vfio_pci.c.o
[2249/2552] Compiling C object tests/unit/test-crypto-afsplit.p/test-crypto-afsplit.c.o
[2250/2552] Compiling C object tests/fp/fp-test.p/fp-test.c.o
[2251/2552] Compiling C object tests/unit/test-image-locking.p/iothread.c.o
[2252/2552] Linking target tests/unit/test-smp-parse
[2253/2552] Linking target tests/unit/check-qom-proplist
[2254/2552] Linking target tests/unit/test-qgraph
[2255/2552] Compiling C object tests/unit/test-fdmon-epoll.p/test-fdmon-epoll.c.o
[2256/2552] Compiling C object tests/libtestqapi.a.p/meson-generated_.._test-qapi-visit.c.o
[2257/2552] Compiling C object tests/unit/test-throttle.p/test-throttle.c.o
[2258/2552] Compiling C object tests/unit/test-io-channel-null.p/test-io-channel-null.c.o
[2259/2552] Linking target storage-daemon/qemu-storage-daemon
[2260/2552] Compiling C object tests/unit/test-bufferiszero.p/test-bufferiszero.c.o
[2261/2552] Compiling C object tests/unit/test-base64.p/test-base64.c.o
[2262/2552] Compiling C object tests/unit/test-fdmon-epoll.p/iothread.c.o
[2263/2552] Compiling C object tests/unit/test-timed-average.p/test-timed-average.c.o
[2264/2552] Compiling C object tests/unit/test-replication.p/iothread.c.o
[2265/2552] Compiling C object tests/unit/test-util-sockets.p/socket-helpers.c.o
[2266/2552] Compiling C object tests/unit/test-char.p/socket-helpers.c.o
[2267/2552] Compiling C object tests/unit/test-xbzrle.p/test-xbzrle.c.o
[2268/2552] Linking static target tests/libtestqapi.a
[2269/2552] Compiling C object tests/unit/test-yank.p/socket-helpers.c.o
[2270/2552] Compiling C object tests/unit/test-crypto-block.p/test-crypto-block.c.o
[2271/2552] Linking target tests/unit/test-authz-simple
[2272/2552] Compiling C object tests/unit/test-image-locking.p/test-image-locking.c.o
[2273/2552] Compiling C object tests/unit/test-qemu-opts.p/test-qemu-opts.c.o
[2274/2552] Linking target tests/qtest/usb-hcd-uhci-test
[2275/2552] Compiling C object tests/unit/test-io-channel-socket.p/test-io-channel-socket.c.o
[2276/2552] Compiling C object tests/qtest/test-filter-mirror.p/test-filter-mirror.c.o
[2277/2552] Linking target qemu-nbd
[2278/2552] Compiling C object tests/qtest/boot-serial-test.p/boot-serial-test.c.o
[2279/2552] Compiling C object tests/unit/test-yank.p/test-yank.c.o
[2280/2552] Compiling C object tests/qtest/i82801b11-test.p/i82801b11-test.c.o
[2281/2552] Linking target tests/unit/test-crypto-hmac
[2282/2552] Linking target tests/unit/check-qjson
[2283/2552] Compiling C object tests/qtest/intel-hda-test.p/intel-hda-test.c.o
[2284/2552] Linking target tests/unit/test-crypto-der
[2285/2552] Compiling C object tests/unit/test-block-iothread.p/test-block-iothread.c.o
[2286/2552] Compiling C object tests/qtest/pvpanic-test.p/pvpanic-test.c.o
[2287/2552] Compiling C object tests/qtest/lpc-ich9-test.p/lpc-ich9-test.c.o
[2288/2552] Compiling C object tests/unit/test-qdev-global-props.p/test-qdev-global-props.c.o
[2289/2552] Compiling C object tests/unit/test-keyval.p/test-keyval.c.o
[2290/2552] Compiling C object tests/unit/test-qga.p/.._qtest_libqmp.c.o
[2291/2552] Compiling C object tests/qtest/ioh3420-test.p/ioh3420-test.c.o
[2292/2552] Linking target tests/unit/test-authz-list
[2293/2552] Linking target tests/unit/test-crypto-hash
[2294/2552] Compiling C object tests/qtest/test-filter-redirector.p/test-filter-redirector.c.o
[2295/2552] Compiling C object tests/qtest/pvpanic-pci-test.p/pvpanic-pci-test.c.o
[2296/2552] Compiling C object tests/unit/test-aio.p/test-aio.c.o
[2297/2552] Linking target qga/qemu-ga
[2298/2552] Linking target tests/unit/test-authz-listfile
[2299/2552] Compiling C object tests/qtest/usb-hcd-xhci-test.p/usb-hcd-xhci-test.c.o
[2300/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/hw_virtio_virtio.c.o
[2301/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/tcg_optimize.c.o
[2302/2552] Linking target tests/unit/ptimer-test
[2303/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_seg_helper.c.o
[2304/2552] Compiling C object tests/qtest/wdt_ib700-test.p/wdt_ib700-test.c.o
[2305/2552] Compiling C object tests/qtest/ipmi-kcs-test.p/ipmi-kcs-test.c.o
[2306/2552] Linking target tests/unit/test-crypto-akcipher
[2307/2552] Linking target tests/unit/test-crypto-cipher
[2308/2552] Compiling C object tests/qtest/usb-hcd-ehci-test.p/usb-hcd-ehci-test.c.o
[2309/2552] Compiling C object tests/unit/test-replication.p/test-replication.c.o
[2310/2552] Linking target tests/unit/test-crypto-secret
[2311/2552] Compiling C object tests/qtest/device-introspect-test.p/device-introspect-test.c.o
[2312/2552] Compiling C object tests/unit/test-util-sockets.p/test-util-sockets.c.o
[2313/2552] Linking target tests/unit/test-bufferiszero
[2314/2552] Compiling C object tests/qtest/ipmi-bt-test.p/ipmi-bt-test.c.o
[2315/2552] Compiling C object tests/qtest/tpm-crb-test.p/tpm-tests.c.o
[2316/2552] Linking target tests/unit/test-base64
[2317/2552] Compiling C object tests/unit/test-util-filemonitor.p/test-util-filemonitor.c.o
[2318/2552] Compiling C object tests/qtest/tpm-crb-swtpm-test.p/tpm-crb-swtpm-test.c.o
[2319/2552] Linking target tests/unit/test-crypto-ivgen
[2320/2552] Linking target tests/unit/test-timed-average
[2321/2552] Linking target tests/unit/test-coroutine
[2322/2552] Linking target tests/unit/test-aio-multithread
[2323/2552] Linking target tests/unit/test-io-channel-file
[2324/2552] Compiling C object tests/qtest/tpm-tis-test.p/tpm-tis-test.c.o
[2325/2552] Linking target tests/unit/test-string-input-visitor
[2326/2552] Linking target tests/unit/test-string-output-visitor
[2327/2552] Linking target tests/unit/test-crypto-afsplit
[2328/2552] Linking target tests/unit/test-io-channel-buffer
[2329/2552] Compiling C object tests/qtest/tpm-tis-test.p/tpm-tests.c.o
[2330/2552] Compiling C object tests/qtest/tpm-tis-swtpm-test.p/tpm-tis-swtpm-test.c.o
[2331/2552] Compiling C object tests/qtest/tpm-crb-swtpm-test.p/tpm-tests.c.o
[2332/2552] Linking target tests/unit/test-opts-visitor
[2333/2552] Compiling C object tests/qtest/tpm-tis-test.p/tpm-util.c.o
[2334/2552] Linking target tests/unit/test-io-channel-command
[2335/2552] Compiling C object tests/qtest/fuzz-e1000e-test.p/fuzz-e1000e-test.c.o
[2336/2552] Linking target tests/unit/test-visitor-serialization
[2337/2552] Compiling C object tests/qtest/tpm-crb-test.p/tpm-emu.c.o
[2338/2552] Compiling C object tests/fp/fp-bench.p/fp-bench.c.o
[2339/2552] Compiling C object tests/qtest/tpm-crb-swtpm-test.p/tpm-emu.c.o
[2340/2552] Compiling C object tests/qtest/tpm-crb-test.p/tpm-util.c.o
[2341/2552] Compiling C object tests/qtest/fuzz-virtio-scsi-test.p/fuzz-virtio-scsi-test.c.o
[2342/2552] Linking target tests/unit/test-io-channel-null
[2343/2552] Compiling C object tests/unit/test-qmp-cmds.p/test-qmp-cmds.c.o
[2344/2552] Linking target tests/unit/test-forward-visitor
[2345/2552] Compiling C object tests/unit/test-hbitmap.p/test-hbitmap.c.o
[2346/2552] Compiling C object tests/qtest/tpm-tis-swtpm-test.p/tpm-tests.c.o
[2347/2552] Linking target tests/unit/test-qemu-opts
[2348/2552] Linking target tests/unit/test-clone-visitor
[2349/2552] Compiling C object tests/qtest/fuzz-megasas-test.p/fuzz-megasas-test.c.o
[2350/2552] Compiling C object tests/qtest/tpm-tis-test.p/tpm-emu.c.o
[2351/2552] Linking target tests/unit/test-qobject-output-visitor
[2352/2552] Compiling C object tests/qtest/tpm-crb-swtpm-test.p/tpm-util.c.o
[2353/2552] Compiling C object tests/qtest/fuzz-sb16-test.p/fuzz-sb16-test.c.o
[2354/2552] Linking target tests/unit/test-qobject-input-visitor
[2355/2552] Compiling C object tests/unit/test-iov.p/test-iov.c.o
[2356/2552] Linking target tests/unit/test-thread-pool
[2357/2552] Compiling C object tests/qtest/tpm-crb-test.p/tpm-crb-test.c.o
[2358/2552] Compiling C object tests/qtest/fuzz-lsi53c895a-test.p/fuzz-lsi53c895a-test.c.o
[2359/2552] Compiling C object tests/qtest/fuzz-sdcard-test.p/fuzz-sdcard-test.c.o
[2360/2552] Linking target tests/unit/test-bdrv-graph-mod
[2361/2552] Linking target tests/unit/test-xbzrle
[2362/2552] Linking target tests/unit/test-keyval
[2363/2552] Compiling C object tests/fp/fp-test.p/berkeley-testfloat-3_source_slowfloat.c.o
[2364/2552] Compiling C object tests/qtest/rtl8139-test.p/rtl8139-test.c.o
[2365/2552] Linking target tests/unit/test-qdev-global-props
[2366/2552] Compiling C object tests/qtest/qos-test.p/emc141x-test.c.o
[2367/2552] Linking target tests/unit/test-qmp-event
[2368/2552] Compiling C object tests/qtest/bios-tables-test.p/boot-sector.c.o
[2369/2552] Compiling C object tests/qtest/tpm-tis-swtpm-test.p/tpm-emu.c.o
[2370/2552] Linking target tests/qtest/test-filter-mirror
[2371/2552] Linking target tests/qtest/i82801b11-test
[2372/2552] Compiling C object tests/qtest/am53c974-test.p/am53c974-test.c.o
[2373/2552] Linking target tests/qtest/boot-serial-test
[2374/2552] Compiling C object tests/qtest/tpm-tis-swtpm-test.p/tpm-util.c.o
[2375/2552] Linking target tests/unit/test-block-backend
[2376/2552] Linking target tests/qtest/ioh3420-test
[2377/2552] Linking target tests/qtest/lpc-ich9-test
[2378/2552] Linking target tests/unit/test-write-threshold
[2379/2552] Linking target tests/qtest/pvpanic-test
[2380/2552] Linking target tests/qtest/pvpanic-pci-test
[2381/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_kvm_kvm-all.c.o
[2382/2552] Compiling C object tests/unit/test-bdrv-drain.p/test-bdrv-drain.c.o
[2383/2552] Linking target tests/unit/test-io-channel-socket
[2384/2552] Linking target tests/unit/test-crypto-block
[2385/2552] Linking target tests/qtest/test-filter-redirector
[2386/2552] Linking target tests/qtest/usb-hcd-xhci-test
[2387/2552] Linking target tests/qtest/intel-hda-test
[2388/2552] Linking target tests/qtest/ipmi-kcs-test
[2389/2552] Linking target tests/qtest/wdt_ib700-test
[2390/2552] Linking target tests/unit/test-util-sockets
[2391/2552] Linking target tests/unit/test-yank
[2392/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/migration_ram.c.o
[2393/2552] Linking target tests/unit/test-util-filemonitor
[2394/2552] Compiling C object tests/qtest/bios-tables-test.p/acpi-utils.c.o
[2395/2552] Linking target tests/qtest/usb-hcd-ehci-test
[2396/2552] Linking target tests/unit/test-io-task
[2397/2552] Linking target tests/unit/test-blockjob-txn
[2398/2552] Linking target tests/qtest/device-introspect-test
[2399/2552] Compiling C object tests/qtest/display-vga-test.p/display-vga-test.c.o
[2400/2552] Compiling C object tests/qtest/tpm-tis-swtpm-test.p/tpm-tis-util.c.o
[2401/2552] Compiling C object tests/qtest/erst-test.p/erst-test.c.o
[2402/2552] Linking target tests/unit/test-blockjob
[2403/2552] Compiling C object tests/qtest/endianness-test.p/endianness-test.c.o
[2404/2552] Compiling C object tests/unit/test-vmstate.p/test-vmstate.c.o
[2405/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_tcg-runtime-gvec.c.o
[2406/2552] Linking target tests/qtest/fuzz-e1000e-test
[2407/2552] Linking target tests/qtest/ipmi-bt-test
[2408/2552] Compiling C object tests/qtest/cxl-test.p/cxl-test.c.o
[2409/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/softmmu_memory.c.o
[2410/2552] Linking target tests/unit/test-qmp-cmds
[2411/2552] Linking target tests/unit/test-throttle
[2412/2552] Linking target tests/unit/test-iov
[2413/2552] Compiling C object tests/qtest/boot-order-test.p/boot-order-test.c.o
[2414/2552] Compiling C object tests/qtest/machine-none-test.p/machine-none-test.c.o
[2415/2552] Compiling C object tests/unit/test-qga.p/test-qga.c.o
[2416/2552] Linking target tests/qtest/fuzz-virtio-scsi-test
[2417/2552] Compiling C object tests/unit/test-char.p/test-char.c.o
[2418/2552] Compiling C object tests/qtest/bios-tables-test.p/tpm-emu.c.o
[2419/2552] Linking target tests/unit/test-fdmon-epoll
[2420/2552] Compiling C object tests/qtest/qos-test.p/ac97-test.c.o
[2421/2552] Linking target tests/qtest/fuzz-sb16-test
[2422/2552] Linking target tests/qtest/fuzz-megasas-test
[2423/2552] Compiling C object tests/qtest/test-hmp.p/test-hmp.c.o
[2424/2552] Compiling C object tests/qtest/vmgenid-test.p/boot-sector.c.o
[2425/2552] Linking target tests/qtest/fuzz-sdcard-test
[2426/2552] Compiling C object tests/qtest/cdrom-test.p/cdrom-test.c.o
[2427/2552] Compiling C object tests/qtest/qos-test.p/ipoctal232-test.c.o
[2428/2552] Compiling C object qemu-img.p/qemu-img.c.o
[2429/2552] Compiling C object tests/qtest/qos-test.p/es1370-test.c.o
[2430/2552] Compiling C object tests/qtest/cdrom-test.p/boot-sector.c.o
[2431/2552] Linking target tests/unit/test-image-locking
[2432/2552] Linking target tests/qtest/fuzz-lsi53c895a-test
[2433/2552] Linking target tests/qtest/rtl8139-test
[2434/2552] Compiling C object tests/qtest/device-plug-test.p/device-plug-test.c.o
[2435/2552] Compiling C object tests/qtest/ivshmem-test.p/.._.._contrib_ivshmem-server_ivshmem-server.c.o
[2436/2552] Compiling C object tests/qtest/qos-test.p/ne2000-test.c.o
[2437/2552] Compiling C object tests/qtest/fw_cfg-test.p/fw_cfg-test.c.o
[2438/2552] Compiling C object tests/qtest/cpu-plug-test.p/cpu-plug-test.c.o
[2439/2552] Compiling C object tests/qtest/qos-test.p/pci-test.c.o
[2440/2552] Compiling C object tests/qtest/qos-test.p/ds1338-test.c.o
[2441/2552] Compiling C object tests/qtest/qos-test.p/e1000-test.c.o
[2442/2552] Compiling C object tests/qtest/vmgenid-test.p/vmgenid-test.c.o
[2443/2552] Compiling C object tests/qtest/vmgenid-test.p/acpi-utils.c.o
[2444/2552] Linking target tests/qtest/tpm-crb-test
[2445/2552] Linking target tests/qtest/tpm-crb-swtpm-test
[2446/2552] Compiling C object tests/qtest/qos-test.p/eepro100-test.c.o
[2447/2552] Linking target tests/qtest/am53c974-test
[2448/2552] Linking target tests/unit/test-block-iothread
[2449/2552] Compiling C object tests/qtest/ivshmem-test.p/ivshmem-test.c.o
[2450/2552] Compiling C object tests/qtest/qmp-cmd-test.p/qmp-cmd-test.c.o
[2451/2552] Compiling C object tests/qtest/qom-test.p/qom-test.c.o
[2452/2552] Compiling C object tests/qtest/qos-test.p/megasas-test.c.o
[2453/2552] Compiling C object tests/qtest/drive_del-test.p/drive_del-test.c.o
[2454/2552] Compiling C object tests/qtest/qos-test.p/lsm303dlhc-mag-test.c.o
[2455/2552] Compiling C object tests/qtest/fdc-test.p/fdc-test.c.o
[2456/2552] Compiling C object tests/qtest/test-x86-cpuid-compat.p/test-x86-cpuid-compat.c.o
[2457/2552] Compiling C object tests/qtest/migration-test.p/migration-helpers.c.o
[2458/2552] Compiling C object tests/qtest/i440fx-test.p/i440fx-test.c.o
[2459/2552] Linking target tests/unit/test-aio
[2460/2552] Linking target tests/qtest/endianness-test
[2461/2552] Linking target tests/unit/test-replication
[2462/2552] Compiling C object tests/qtest/qos-test.p/virtio-rng-test.c.o
[2463/2552] Linking target tests/qtest/erst-test
[2464/2552] Linking target tests/qtest/tpm-tis-test
[2465/2552] Linking target tests/unit/test-qga
[2466/2552] Linking target tests/qtest/cxl-test
[2467/2552] Compiling C object tests/qtest/qos-test.p/spapr-phb-test.c.o
[2468/2552] Compiling C object tests/qtest/qmp-test.p/qmp-test.c.o
[2469/2552] Compiling C object tests/qtest/qos-test.p/tulip-test.c.o
[2470/2552] Linking target tests/qtest/display-vga-test
[2471/2552] Compiling C object tests/qtest/qos-test.p/pcnet-test.c.o
[2472/2552] Compiling C object tests/qtest/qos-test.p/pca9552-test.c.o
[2473/2552] Compiling C object tests/qtest/qos-test.p/virtio-test.c.o
[2474/2552] Compiling C object tests/qtest/qos-test.p/usb-hcd-ohci-test.c.o
[2475/2552] Compiling C object tests/qtest/qos-test.p/virtio-serial-test.c.o
[2476/2552] Compiling C object tests/qtest/qos-test.p/vmxnet3-test.c.o
[2477/2552] Compiling C object tests/qtest/qos-test.p/nvme-test.c.o
[2478/2552] Linking target tests/unit/test-vmstate
[2479/2552] Linking target tests/unit/test-hbitmap
[2480/2552] Compiling C object tests/qtest/ahci-test.p/ahci-test.c.o
[2481/2552] Linking target tests/qtest/machine-none-test
[2482/2552] Compiling C object tests/qtest/qos-test.p/sdhci-test.c.o
[2483/2552] Compiling C object tests/qtest/qos-test.p/tmp105-test.c.o
[2484/2552] Compiling C object tests/qtest/ide-test.p/ide-test.c.o
[2485/2552] Linking target tests/qtest/test-hmp
[2486/2552] Compiling C object tests/qtest/qos-test.p/e1000e-test.c.o
[2487/2552] Linking target tests/qtest/tpm-tis-swtpm-test
[2488/2552] Compiling C object tests/qtest/qos-test.p/adm1272-test.c.o
[2489/2552] Compiling C object tests/qtest/qos-test.p/virtio-scsi-test.c.o
[2490/2552] Compiling C object tests/qtest/qos-test.p/qos-test.c.o
[2491/2552] Linking target tests/unit/test-char
[2492/2552] Linking target tests/qtest/cdrom-test
[2493/2552] Linking target tests/qtest/device-plug-test
[2494/2552] Linking target tests/qtest/cpu-plug-test
[2495/2552] Compiling C object tests/qtest/qos-test.p/virtio-net-test.c.o
[2496/2552] Linking target tests/qtest/fw_cfg-test
[2497/2552] Compiling C object tests/qtest/q35-test.p/q35-test.c.o
[2498/2552] Linking target tests/qtest/vmgenid-test
[2499/2552] Compiling C object tests/qtest/qos-test.p/virtio-iommu-test.c.o
[2500/2552] Linking target tests/qtest/boot-order-test
[2501/2552] Linking target tests/qtest/ivshmem-test
[2502/2552] Linking target tests/qtest/qom-test
[2503/2552] Linking target tests/qtest/fdc-test
[2504/2552] Linking target tests/qtest/qmp-cmd-test
[2505/2552] Linking target tests/qtest/drive_del-test
[2506/2552] Linking target tests/qtest/test-x86-cpuid-compat
[2507/2552] Compiling C object tests/qtest/readconfig-test.p/readconfig-test.c.o
[2508/2552] Linking target tests/unit/test-bdrv-drain
[2509/2552] Compiling C object tests/qtest/numa-test.p/numa-test.c.o
[2510/2552] Linking target tests/qtest/i440fx-test
[2511/2552] Compiling C object tests/qtest/tco-test.p/tco-test.c.o
[2512/2552] Compiling C object libcommon.fa.p/hw_nvme_ctrl.c.o
[2513/2552] Compiling C object tests/qtest/hd-geo-test.p/hd-geo-test.c.o
[2514/2552] Compiling C object tests/unit/test-cutils.p/test-cutils.c.o
[2515/2552] Linking target tests/qtest/qmp-test
[2516/2552] Compiling C object tests/qtest/qos-test.p/max34451-test.c.o
[2517/2552] Linking target tests/qtest/ahci-test
[2518/2552] Compiling C object tests/qtest/qos-test.p/vhost-user-blk-test.c.o
[2519/2552] Linking target tests/qtest/ide-test
[2520/2552] Compiling C object tests/qtest/qos-test.p/vhost-user-test.c.o
[2521/2552] Linking target qemu-img
[2522/2552] Compiling C object tests/qtest/qos-test.p/isl_pmbus_vr-test.c.o
[2523/2552] Compiling C object tests/qtest/qos-test.p/virtio-blk-test.c.o
[2524/2552] Linking target tests/qtest/q35-test
[2525/2552] Compiling C object tests/qtest/rtc-test.p/rtc-test.c.o
[2526/2552] Linking target tests/unit/test-cutils
[2527/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/softmmu_physmem.c.o
[2528/2552] Linking target tests/qtest/readconfig-test
[2529/2552] Linking target tests/qtest/numa-test
[2530/2552] Linking target tests/qtest/tco-test
[2531/2552] Compiling C object tests/qtest/migration-test.p/migration-test.c.o
[2532/2552] Compiling C object tests/qtest/bios-tables-test.p/bios-tables-test.c.o
[2533/2552] Linking target tests/qtest/hd-geo-test
[2534/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/tcg_tcg-op.c.o
[2535/2552] Linking target tests/qtest/rtc-test
[2536/2552] Linking target tests/qtest/migration-test
[2537/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/tcg_tcg-op-gvec.c.o
[2538/2552] Linking target tests/qtest/qos-test
[2539/2552] Linking target tests/qtest/bios-tables-test
[2540/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/tcg_tcg.c.o
[2541/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/accel_tcg_cputlb.c.o
[2542/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_fpu_helper.c.o
[2543/2552] Compiling C object libcommon.fa.p/hw_display_cirrus_vga.c.o
[2544/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/fpu_softfloat.c.o
[2545/2552] Compiling C object tests/fp/fp-bench.p/.._.._fpu_softfloat.c.o
[2546/2552] Compiling C object tests/fp/fp-test-log2.p/.._.._fpu_softfloat.c.o
[2547/2552] Linking target tests/fp/fp-bench
[2548/2552] Compiling C object tests/fp/fp-test.p/.._.._fpu_softfloat.c.o
[2549/2552] Linking target tests/fp/fp-test-log2
[2550/2552] Linking target tests/fp/fp-test
[2551/2552] Compiling C object libqemu-x86_64-softmmu.fa.p/target_i386_tcg_translate.c.o
[2552/2552] Linking target qemu-system-x86_64
make[1]: Leaving directory '/lkp/benchmarks/qemu/build'
changing dir to build for make ""...
make[1]: Entering directory '/lkp/benchmarks/qemu/build'
  GIT     ui/keycodemapdb meson tests/fp/berkeley-testfloat-3 tests/fp/berkeley-softfloat-3 dtc
[1/50] Generating tests/include/QAPI test (include) with a custom command
[2/18] Generating qemu-version.h with a custom command (wrapped by meson to capture output)
make[1]: Leaving directory '/lkp/benchmarks/qemu/build'
2023-01-15 18:58:59 ./run_tests.sh
PASS apic-split (56 tests)
PASS ioapic-split (19 tests)
PASS x2apic (56 tests)
FAIL xapic (timeout; duration=60)
PASS ioapic (26 tests)
SKIP cmpxchg8b (i386 only)
PASS smptest (1 tests)
PASS smptest3 (1 tests)
PASS vmexit_cpuid
PASS vmexit_vmcall
PASS vmexit_mov_from_cr8
PASS vmexit_mov_to_cr8
PASS vmexit_inl_pmtimer
PASS vmexit_ipi
PASS vmexit_ipi_halt
PASS vmexit_ple_round_robin
PASS vmexit_tscdeadline
PASS vmexit_tscdeadline_immed
PASS vmexit_cr0_wp
PASS vmexit_cr4_pge
PASS access
PASS access-reduced-maxphyaddr
PASS smap (18 tests)
PASS pku (7 tests)
SKIP pks (0 tests)
SKIP asyncpf (0 tests)
PASS emulator (136 tests, 2 skipped)
PASS eventinj (13 tests)
PASS hypercall (2 tests)
PASS idt_test (4 tests)
PASS memory (7 tests)
PASS msr (303 tests)
FAIL pmu (251 tests, 18 unexpected failures)
PASS pmu_lbr (3 tests)
PASS pmu_pebs (340 tests)
SKIP vmware_backdoors (/sys/module/kvm/parameters/enable_vmware_backdoor not equal to Y)
PASS realmode
PASS s3
PASS setjmp (10 tests)
PASS sieve
PASS syscall (2 tests)
PASS tsc (3 tests)
PASS tsc_adjust (6 tests)
PASS xsave (17 tests)
PASS rmap_chain
SKIP svm (0 tests)
SKIP svm_pause_filter (0 tests)
SKIP svm_npt (0 tests)
SKIP taskswitch (i386 only)
SKIP taskswitch2 (i386 only)
PASS kvmclock_test
PASS pcid-enabled (2 tests)
PASS pcid-disabled (2 tests)
PASS pcid-asymmetric (2 tests)
PASS rdpru (1 tests)
PASS umip (21 tests)
SKIP la57 (i386 only)
PASS vmx (430055 tests, 2 expected failures, 5 skipped)
PASS ept (6564 tests)
PASS vmx_eoi_bitmap_ioapic_scan (7 tests)
PASS vmx_hlt_with_rvi_test (7 tests)
PASS vmx_apicv_test (9239 tests)
PASS vmx_apic_passthrough_thread (8 tests)
PASS vmx_init_signal_test (11 tests)
PASS vmx_sipi_signal_test (12 tests)
PASS vmx_apic_passthrough_tpr_threshold_test (6 tests)
PASS vmx_vmcs_shadow_test (153979 tests)
PASS vmx_pf_exception_test (66 tests)
PASS vmx_pf_vpid_test (18677860 tests)
PASS vmx_pf_invvpid_test (18677860 tests)
PASS vmx_pf_no_vpid_test (18677860 tests)
PASS vmx_pf_exception_test_reduced_maxphyaddr (17400142 tests)
PASS debug (22 tests)
PASS hyperv_synic (1 tests)
PASS hyperv_connections (7 tests)
PASS hyperv_stimer (12 tests)
PASS hyperv_clock
PASS intel_iommu (11 tests)
PASS tsx-ctrl (7 tests)
SKIP intel_cet (0 tests)

[-- Attachment #6: job.yaml --]
[-- Type: text/plain, Size: 4957 bytes --]

---

#! jobs/kvm-unit-tests-qemu.yaml
suite: kvm-unit-tests-qemu
testcase: kvm-unit-tests-qemu
category: functional
timeout: 35m
qemu_branch: qemu/master
qemu_commit: 222059a0fccf4af3be776fe35a5ea2d6a68f9a0b
qemu_config: x86_64-softmmu
kvm-unit-tests-qemu:
job_origin: kvm-unit-tests-qemu.yaml

#! queue options
queue_cmdline_keys:
- branch
- commit
queue: bisect
testbox: lkp-icl-2sp4
tbox_group: lkp-icl-2sp4
submit_id: 63c3f14220cb427735f8b8d1
job_file: "/lkp/jobs/scheduled/lkp-icl-2sp4/kvm-unit-tests-qemu-defaults-debian-11.1-x86_64-20220510.cgz-99e2853d906a7593e6a3f0e5bc7ecc503b6b9462-20230115-30517-qidsts-0.yaml"
id: af552666b0c3adddddfff188f5f8a270d2388822
queuer_version: "/zday/lkp"

#! hosts/lkp-icl-2sp4
model: Ice Lake
nr_node: 2
nr_cpu: 128
memory: 128G
nr_ssd_partitions: 3
nr_hdd_partitions: 6
hdd_partitions: "/dev/disk/by-id/ata-WDC_WD20SPZX-08UA7_WD-WXE2EA0ECVAS-part*"
ssd_partitions: "/dev/disk/by-id/ata-INTEL_SSDSC2BA800G3_BTTV34510181800JGN-part*"
rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2BB240G4_CVWL422602EB240NGN-part1"
kernel_cmdline_hw: acpi_rsdp=0x69ffd014
brand: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz

#! include/category/functional
kmsg:
heartbeat:
meminfo:

#! include/kvm-unit-tests-qemu
need_kconfig:
- KVM: m
- KVM_INTEL: m
- X86_INTEL_TSX_MODE_OFF: n
- X86_INTEL_TSX_MODE_AUTO: y
- X86_INTEL_TSX_MODE_ON: n

#! include/queue/cyclic
commit: 99e2853d906a7593e6a3f0e5bc7ecc503b6b9462

#! include/testbox/lkp-icl-2sp4
ucode: '0xd000363'
bisect_dmesg: true
kconfig: x86_64-rhel-8.3-kvm
enqueue_time: 2023-01-15 20:27:46.748062195 +08:00
_id: 63c3f14220cb427735f8b8d1
_rt: "/result/kvm-unit-tests-qemu/defaults/lkp-icl-2sp4/debian-11.1-x86_64-20220510.cgz/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/x86_64-softmmu/222059a0fccf4af3be776fe35a5ea2d6a68f9a0b"

#! schedule options
user: lkp
compiler: gcc-11
LKP_SERVER: internal-lkp-server
head_commit: 21041184c4351d783bba9e9d3716ed6317b8e808
base_commit: 88603b6dc419445847923fcb7fe5080067a30f98
branch: linux-devel/devel-hourly-20230104-153251
rootfs: debian-11.1-x86_64-20220510.cgz
result_root: "/result/kvm-unit-tests-qemu/defaults/lkp-icl-2sp4/debian-11.1-x86_64-20220510.cgz/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/x86_64-softmmu/222059a0fccf4af3be776fe35a5ea2d6a68f9a0b/0"
scheduler_version: "/lkp/lkp/src"
arch: x86_64
max_uptime: 2100
initrd: "/osimage/debian/debian-11.1-x86_64-20220510.cgz"
bootloader_append:
- root=/dev/ram0
- RESULT_ROOT=/result/kvm-unit-tests-qemu/defaults/lkp-icl-2sp4/debian-11.1-x86_64-20220510.cgz/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/x86_64-softmmu/222059a0fccf4af3be776fe35a5ea2d6a68f9a0b/0
- BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/vmlinuz-6.1.0-rc8-00451-g99e2853d906a
- branch=linux-devel/devel-hourly-20230104-153251
- job=/lkp/jobs/scheduled/lkp-icl-2sp4/kvm-unit-tests-qemu-defaults-debian-11.1-x86_64-20220510.cgz-99e2853d906a7593e6a3f0e5bc7ecc503b6b9462-20230115-30517-qidsts-0.yaml
- user=lkp
- ARCH=x86_64
- kconfig=x86_64-rhel-8.3-kvm
- commit=99e2853d906a7593e6a3f0e5bc7ecc503b6b9462
- initcall_debug
- nmi_watchdog=0
- acpi_rsdp=0x69ffd014
- max_uptime=2100
- LKP_SERVER=internal-lkp-server
- nokaslr
- selinux=0
- debug
- apic=debug
- sysrq_always_enabled
- rcupdate.rcu_cpu_stall_timeout=100
- net.ifnames=0
- printk.devkmsg=on
- panic=-1
- softlockup_panic=1
- nmi_watchdog=panic
- oops=panic
- load_ramdisk=2
- prompt_ramdisk=0
- drbd.minor_count=8
- systemd.log_level=err
- ignore_loglevel
- console=tty0
- earlyprintk=ttyS0,115200
- console=ttyS0,115200
- vga=normal
- rw

#! runtime status
modules_initrd: "/pkg/linux/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/modules.cgz"
bm_initrd: "/osimage/deps/debian-11.1-x86_64-20220510.cgz/run-ipconfig_20220515.cgz,/osimage/deps/debian-11.1-x86_64-20220510.cgz/lkp_20220513.cgz,/osimage/deps/debian-11.1-x86_64-20220510.cgz/rsync-rootfs_20220515.cgz,/osimage/deps/debian-11.1-x86_64-20220510.cgz/kvm-unit-tests-qemu_20220726.cgz,/osimage/pkg/debian-11.1-x86_64-20220510.cgz/kvm-unit-tests-x86_64-e11a0e2-1_20230106.cgz,/osimage/deps/debian-11.1-x86_64-20220510.cgz/hw_20220526.cgz"
ucode_initrd: "/osimage/ucode/intel-ucode-20220804.cgz"
lkp_initrd: "/osimage/user/lkp/lkp-x86_64.cgz"
site: inn

#! /db/releases/20230105220729/lkp-src/include/site/inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
oom-killer:
watchdog:
last_kernel: 6.2.0-rc3-00321-gb7bfaa761d76
schedule_notify_address:

#! user overrides
kernel: "/pkg/linux/x86_64-rhel-8.3-kvm/gcc-11/99e2853d906a7593e6a3f0e5bc7ecc503b6b9462/vmlinuz-6.1.0-rc8-00451-g99e2853d906a"
dequeue_time: 2023-01-15 21:09:23.544879871 +08:00

#! /db/releases/20230113175433/lkp-src/include/site/inn
job_state: finished
loadavg: 1.22 1.36 1.85 1/1031 34917
start_time: '1673788295'
end_time: '1673789375'
version: "/lkp/lkp/.src-20230111-092942:5b9bf8a4a:d984198af"

[-- Attachment #7: reproduce --]
[-- Type: text/plain, Size: 149 bytes --]

 "git" "checkout" "-q" "222059a0fccf4af3be776fe35a5ea2d6a68f9a0b"
 "./configure" "--target-list=x86_64-softmmu"
 "make" "-j" "128"
 "./run_tests.sh"

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2023-01-04  6:29       ` Mingwei Zhang
  2023-01-04  6:57         ` Mingwei Zhang
@ 2023-01-18 17:36         ` Sean Christopherson
  1 sibling, 0 replies; 47+ messages in thread
From: Sean Christopherson @ 2023-01-18 17:36 UTC (permalink / raw)
  To: Mingwei Zhang
  Cc: Vipin Sharma, pbonzini, bgardon, dmatlack, kvm, linux-kernel

On Tue, Jan 03, 2023, Mingwei Zhang wrote:
> On Tue, Jan 3, 2023 at 5:00 PM Vipin Sharma <vipinsh@google.com> wrote:
> > > I think the mmu_cache allocation and deallocation may cause the usage
> > > of GFP_ATOMIC (as observed by other reviewers as well). Adding a new
> > > lock would definitely sound like a plan, but I think it might affect
> > > the performance. Alternatively, I am wondering if we could use a
> > > mmu_cache_sequence similar to mmu_notifier_seq to help avoid the
> > > concurrency?
> > >
> >
> > Can you explain more about the performance impact? Each vcpu will have
> > its own mutex. So, only contention will be with the mmu_shrinker. This
> > shrinker will use mutex_try_lock() which will not block to wait for
> > the lock, it will just pass on to the next vcpu. While shrinker is
> > holding the lock, vcpu will be blocked in the page fault path but I
> > think it should not have a huge impact considering it will execute
> > rarely and for a small time.
> >
> > > Similar to mmu_notifier_seq, mmu_cache_sequence should be protected by
> > > mmu write lock. In the page fault path, each vcpu has to collect a
> > > snapshot of  mmu_cache_sequence before calling into
> > > mmu_topup_memory_caches() and check the value again when holding the
> > > mmu lock. If the value is different, that means the mmu_shrinker has
> > > removed the cache objects and because of that, the vcpu should retry.
> > >
> >
> > Yeah, this can be one approach. I think it will come down to the
> > performance impact of using mutex which I don't think should be a
> > concern.
> 
> hmm, I think you are right that there is no performance overhead by
> adding a mutex and letting the shrinker using mutex_trylock(). The
> point of using a sequence counter is to avoid the new lock, since
> introducing a new lock will increase management burden.

No, more locks don't necessarily mean higher maintenance cost.  More complexity
definitely means more maintenance, but additional locks don't necessarily equate
to increased complexity.

Lockless algorithms are almost always more difficult to reason about, i.e. trying
to use a sequence counter for this case would be more complex than using a per-vCPU
mutex.  The only complexity in adding another mutex is understanding why an additional
lock is necessary, and IMO that's fairly easy to explain/understand (the shrinker will
almost never succeed if it has to wait for vcpu->mutex to be dropped).

> So unless it is necessary, we probably should choose a simple solution first.
> 
> In this case, I think we do have such a choice, since a similar
> mechanism has already been used by mmu_notifiers.

The mmu_notifier case is very different.  The invalidations affect the entire VM,
notifiers _must_ succeed, may or may not allow sleeping, the readers (vCPUs)
effectively need protection while running in the guest, and practically speaking
holding a per-VM (or global) lock of any kind while a vCPU is running in the guest
is not viable, e.g. even holding kvm->srcu is disallowed.

In other words, using a traditional locking scheme to serialize guest accesses
with host-initiated page table (or memslot) updates is simply not an option.


* Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
  2023-01-04  0:25       ` Vipin Sharma
@ 2023-01-18 17:43         ` Sean Christopherson
  0 siblings, 0 replies; 47+ messages in thread
From: Sean Christopherson @ 2023-01-18 17:43 UTC (permalink / raw)
  To: Vipin Sharma; +Cc: David Matlack, pbonzini, bgardon, kvm, linux-kernel

@all, trim your replies!

On Tue, Jan 03, 2023, Vipin Sharma wrote:
> On Tue, Jan 3, 2023 at 10:01 AM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > On Thu, Dec 29, 2022 at 1:55 PM David Matlack <dmatlack@google.com> wrote:
> > > > @@ -6646,66 +6690,49 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
> > > >  static unsigned long
> > > >  mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > >  {
> > > > -     struct kvm *kvm;
> > > > -     int nr_to_scan = sc->nr_to_scan;
> > > > +     struct kvm_mmu_memory_cache *cache;
> > > > +     struct kvm *kvm, *first_kvm = NULL;
> > > >       unsigned long freed = 0;
> > > > +     /* spinlock for memory cache */
> > > > +     spinlock_t *cache_lock;
> > > > +     struct kvm_vcpu *vcpu;
> > > > +     unsigned long i;
> > > >
> > > >       mutex_lock(&kvm_lock);
> > > >
> > > >       list_for_each_entry(kvm, &vm_list, vm_list) {
> > > > -             int idx;
> > > > -             LIST_HEAD(invalid_list);
> > > > -
> > > > -             /*
> > > > -              * Never scan more than sc->nr_to_scan VM instances.
> > > > -              * Will not hit this condition practically since we do not try
> > > > -              * to shrink more than one VM and it is very unlikely to see
> > > > -              * !n_used_mmu_pages so many times.
> > > > -              */
> > > > -             if (!nr_to_scan--)
> > > > +             if (first_kvm == kvm)
> > > >                       break;
> > > > -             /*
> > > > -              * n_used_mmu_pages is accessed without holding kvm->mmu_lock
> > > > -              * here. We may skip a VM instance errorneosly, but we do not
> > > > -              * want to shrink a VM that only started to populate its MMU
> > > > -              * anyway.
> > > > -              */
> > > > -             if (!kvm->arch.n_used_mmu_pages &&
> > > > -                 !kvm_has_zapped_obsolete_pages(kvm))
> > > > -                     continue;
> > > > +             if (!first_kvm)
> > > > +                     first_kvm = kvm;
> > > > +             list_move_tail(&kvm->vm_list, &vm_list);
> > > >
> > > > -             idx = srcu_read_lock(&kvm->srcu);
> > > > -             write_lock(&kvm->mmu_lock);
> > > > +             kvm_for_each_vcpu(i, vcpu, kvm) {
> > >
> > > What protects this from racing with vCPU creation/deletion?
> > >
> 
> vCPU deletion:
> We take kvm_lock in mmu_shrink_scan(); the same lock is taken in
> kvm_destroy_vm() to remove a VM from vm_list. So, once we are
> iterating vm_list we will not see any VM removal, which means no
> vCPU removal.
> 
> I didn't find any other code for vCPU deletion except failures during
> VM and VCPU set up. A VM is only added to vm_list after successful
> creation.

Yep, KVM doesn't support destroying/freeing a vCPU after it's been added.

> vCPU creation:
> I think it will work.
> 
> kvm_vm_ioctl_create_vcpu() initializes the vCPU and adds it to
> kvm->vcpu_array, which is an xarray managed by RCU.
> Only after this is online_vcpus incremented. So if kvm_for_each_vcpu(),
> which uses RCU to read entries, sees the incremented online_vcpus
> value, then it will also see all of the vCPU initialization.

Yep.  The shrinker may race with a vCPU creation, e.g. not process a just-created
vCPU, but that's totally ok in this case since the shrinker path is best effort
(and purging the caches of a newly created vCPU is likely pointless).

> @Sean, Paolo
> 
> Is the above explanation correct, kvm_for_each_vcpu() is safe without any lock?

Well, in this case, you do need to hold kvm_lock ;-)

But yes, iterating over vCPUs without holding the per-VM kvm->lock is safe, the
caller just needs to ensure the VM can't be destroyed, i.e. either needs to hold
a reference to the VM or needs to hold kvm_lock.


end of thread, other threads:[~2023-01-18 17:45 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-22  2:34 [Patch v3 0/9] NUMA aware page table's pages allocation Vipin Sharma
2022-12-22  2:34 ` [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches Vipin Sharma
2022-12-27 18:37   ` Ben Gardon
2022-12-28 22:07     ` Vipin Sharma
2022-12-29 21:15       ` David Matlack
2023-01-03 17:38         ` Vipin Sharma
2022-12-29 21:54   ` David Matlack
2023-01-03 18:01     ` Vipin Sharma
2023-01-04  0:25       ` Vipin Sharma
2023-01-18 17:43         ` Sean Christopherson
2023-01-03 19:32   ` Mingwei Zhang
2023-01-04  1:00     ` Vipin Sharma
2023-01-04  6:29       ` Mingwei Zhang
2023-01-04  6:57         ` Mingwei Zhang
2023-01-18 17:36         ` Sean Christopherson
2023-01-16  4:14   ` kernel test robot
2022-12-22  2:34 ` [Patch v3 2/9] KVM: x86/mmu: Remove zapped_obsolete_pages from struct kvm_arch{} Vipin Sharma
2022-12-29 21:59   ` David Matlack
2022-12-22  2:34 ` [Patch v3 3/9] KVM: x86/mmu: Shrink split_shadow_page_cache via KVM MMU shrinker Vipin Sharma
2022-12-22  2:34 ` [Patch v3 4/9] KVM: Add module param to make page tables NUMA aware Vipin Sharma
2022-12-29 22:05   ` David Matlack
2022-12-22  2:34 ` [Patch v3 5/9] KVM: x86/mmu: Allocate TDP page table's page on correct NUMA node on split Vipin Sharma
2022-12-27 19:02   ` Ben Gardon
2022-12-28 22:07     ` Vipin Sharma
2022-12-29 22:30   ` David Matlack
2023-01-03 18:26     ` Vipin Sharma
2022-12-22  2:34 ` [Patch v3 6/9] KVM: Provide NUMA node support to kvm_mmu_memory_cache{} Vipin Sharma
2022-12-27 19:09   ` Ben Gardon
2022-12-28 22:07     ` Vipin Sharma
2022-12-29 18:22       ` Ben Gardon
2023-01-03 17:36         ` Vipin Sharma
2022-12-29 23:08   ` David Matlack
2022-12-29 23:11     ` David Matlack
2023-01-03 18:45       ` Vipin Sharma
2023-01-03 18:55         ` David Matlack
2022-12-22  2:34 ` [Patch v3 7/9] KVM: x86/mmu: Allocate page table's pages on NUMA node of the underlying pages Vipin Sharma
2022-12-27 19:34   ` Ben Gardon
2022-12-28 22:08     ` Vipin Sharma
2022-12-29 18:20       ` Ben Gardon
2022-12-22  2:34 ` [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware Vipin Sharma
2022-12-27 19:42   ` Ben Gardon
2022-12-28 22:08     ` Vipin Sharma
2022-12-29 23:18   ` David Matlack
2023-01-03 18:49     ` Vipin Sharma
2022-12-22  2:34 ` [Patch v3 9/9] KVM: x86/mmu: Reduce default cache size in KVM from 40 to PT64_ROOT_MAX_LEVEL Vipin Sharma
2022-12-27 19:52   ` Ben Gardon
2022-12-28 22:08     ` Vipin Sharma
