From: Yuan Yao <yuan.yao@linux.intel.com>
To: Chao Peng <chao.p.peng@linux.intel.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
	linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	x86@kernel.org, "H . Peter Anvin" <hpa@zytor.com>,
	Hugh Dickins <hughd@google.com>, Jeff Layton <jlayton@kernel.org>,
	"J . Bruce Fields" <bfields@fieldses.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Shuah Khan <shuah@kernel.org>, Mike Rapoport <rppt@kernel.org>,
	Steven Price <steven.price@arm.com>,
	"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>,
	Vlastimil Babka <vbabka@suse.cz>,
	Vishal Annapurve <vannapurve@google.com>,
	Yu Zhang <yu.c.zhang@linux.intel.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com,
	ak@linux.intel.com, david@redhat.com, aarcange@redhat.com,
	ddutile@redhat.com, dhildenb@redhat.com,
	Quentin Perret <qperret@google.com>,
	tabba@google.com, Michael Roth <michael.roth@amd.com>,
	mhocko@suse.com, Muchun Song <songmuchun@bytedance.com>,
	wei.w.wang@intel.com
Subject: Re: [PATCH v9 5/8] KVM: Register/unregister the guest private memory regions
Date: Wed, 9 Nov 2022 13:52:32 +0800	[thread overview]
Message-ID: <20221109055232.ufj6tgv52p2pxsmr@yy-desk-7060> (raw)
In-Reply-To: <20221108094141.GA84958@chaop.bj.intel.com>

On Tue, Nov 08, 2022 at 05:41:41PM +0800, Chao Peng wrote:
> On Tue, Nov 08, 2022 at 09:35:06AM +0800, Yuan Yao wrote:
> > On Tue, Oct 25, 2022 at 11:13:41PM +0800, Chao Peng wrote:
> > > Introduce generic private memory registration/unregistration by reusing
> > > the existing SEV ioctls KVM_MEMORY_ENCRYPT_{UN,}REG_REGION. It differs
> > > from the SEV case by treating the address in the region as a gpa instead
> > > of an hva. Which of the two cases a given ioctl call falls into is
> > > determined by kvm_arch_has_private_mem(); architectures which support
> > > KVM_PRIVATE_MEM should override this function.
> > >
> > > KVM internally defaults all guest memory to private memory and maintains
> > > the shared memory in 'mem_attr_array'. The above ioctls operate on this
> > > field and unmap existing mappings, if any.
> > >
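
For context, a minimal userspace sketch of the intended usage; vm_fd,
gpa and len are placeholder names, not part of this patch:

	#include <stdbool.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Flip a page-aligned GPA range between private and shared. */
	static int set_range_private(int vm_fd, __u64 gpa, __u64 len,
				     bool make_private)
	{
		struct kvm_enc_region region = {
			.addr = gpa,	/* a GPA here, not an HVA as for SEV */
			.size = len,
		};

		return ioctl(vm_fd, make_private ?
				     KVM_MEMORY_ENCRYPT_REG_REGION :
				     KVM_MEMORY_ENCRYPT_UNREG_REGION,
			     &region);
	}
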
> > > Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> > > ---
> > >  Documentation/virt/kvm/api.rst |  17 ++-
> > >  arch/x86/kvm/Kconfig           |   1 +
> > >  include/linux/kvm_host.h       |  10 +-
> > >  virt/kvm/Kconfig               |   4 +
> > >  virt/kvm/kvm_main.c            | 227 +++++++++++++++++++++++++--------
> > >  5 files changed, 198 insertions(+), 61 deletions(-)
> > >
> > > diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> > > index 975688912b8c..08253cf498d1 100644
> > > --- a/Documentation/virt/kvm/api.rst
> > > +++ b/Documentation/virt/kvm/api.rst
> > > @@ -4717,10 +4717,19 @@ Documentation/virt/kvm/x86/amd-memory-encryption.rst.
> > >  This ioctl can be used to register a guest memory region which may
> > >  contain encrypted data (e.g. guest RAM, SMRAM etc).
> > >
> > > -It is used in the SEV-enabled guest. When encryption is enabled, a guest
> > > -memory region may contain encrypted data. The SEV memory encryption
> > > -engine uses a tweak such that two identical plaintext pages, each at
> > > -different locations will have differing ciphertexts. So swapping or
> > > +Currently this ioctl supports registering memory regions for two usages:
> > > +private memory and SEV-encrypted memory.
> > > +
> > > +When private memory is enabled, this ioctl is used to register a guest private
> > > +memory region, and the addr/size of kvm_enc_region represents a guest physical
> > > +address (GPA). In this usage, this ioctl zaps the existing guest memory
> > > +mappings in KVM that fall into the region.
> > > +
> > > +When SEV-encrypted memory is enabled, this ioctl is used to register a guest
> > > +memory region which may contain encrypted data for a SEV-enabled guest. The
> > > +addr/size of kvm_enc_region represents a userspace address (HVA). The SEV
> > > +memory encryption engine uses a tweak such that two identical plaintext pages,
> > > +each at different locations, will have differing ciphertexts. So swapping or
> > >  moving ciphertext of those pages will not result in plaintext being
> > >  swapped. So relocating (or migrating) physical backing pages for the SEV
> > >  guest will require some additional steps.
> > > diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> > > index 8d2bd455c0cd..73fdfa429b20 100644
> > > --- a/arch/x86/kvm/Kconfig
> > > +++ b/arch/x86/kvm/Kconfig
> > > @@ -51,6 +51,7 @@ config KVM
> > >  	select HAVE_KVM_PM_NOTIFIER if PM
> > >  	select HAVE_KVM_RESTRICTED_MEM if X86_64
> > >  	select RESTRICTEDMEM if HAVE_KVM_RESTRICTED_MEM
> > > +	select KVM_GENERIC_PRIVATE_MEM if HAVE_KVM_RESTRICTED_MEM
> > >  	help
> > >  	  Support hosting fully virtualized guest machines using hardware
> > >  	  virtualization extensions.  You will need a fairly recent
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index 79e5cbc35fcf..4ce98fa0153c 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -245,7 +245,8 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> > >  int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
> > >  #endif
> > >
> > > -#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
> > > +
> > > +#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_KVM_GENERIC_PRIVATE_MEM)
> > >  struct kvm_gfn_range {
> > >  	struct kvm_memory_slot *slot;
> > >  	gfn_t start;
> > > @@ -254,6 +255,9 @@ struct kvm_gfn_range {
> > >  	bool may_block;
> > >  };
> > >  bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
> > > +#endif
> > > +
> > > +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
> > >  bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
> > >  bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
> > >  bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
> > > @@ -794,6 +798,9 @@ struct kvm {
> > >  	struct notifier_block pm_notifier;
> > >  #endif
> > >  	char stats_id[KVM_STATS_NAME_SIZE];
> > > +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
> > > +	struct xarray mem_attr_array;
> > > +#endif
> > >  };
> > >
> > >  #define kvm_err(fmt, ...) \
> > > @@ -1453,6 +1460,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu);
> > >  int kvm_arch_post_init_vm(struct kvm *kvm);
> > >  void kvm_arch_pre_destroy_vm(struct kvm *kvm);
> > >  int kvm_arch_create_vm_debugfs(struct kvm *kvm);
> > > +bool kvm_arch_has_private_mem(struct kvm *kvm);
> > >
> > >  #ifndef __KVM_HAVE_ARCH_VM_ALLOC
> > >  /*
> > > diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> > > index 9ff164c7e0cc..69ca59e82149 100644
> > > --- a/virt/kvm/Kconfig
> > > +++ b/virt/kvm/Kconfig
> > > @@ -89,3 +89,7 @@ config HAVE_KVM_PM_NOTIFIER
> > >
> > >  config HAVE_KVM_RESTRICTED_MEM
> > >         bool
> > > +
> > > +config KVM_GENERIC_PRIVATE_MEM
> > > +       bool
> > > +       depends on HAVE_KVM_RESTRICTED_MEM
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index 09c9cdeb773c..fc3835826ace 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -520,6 +520,62 @@ void kvm_destroy_vcpus(struct kvm *kvm)
> > >  }
> > >  EXPORT_SYMBOL_GPL(kvm_destroy_vcpus);
> > >
> > > +static inline void update_invalidate_range(struct kvm *kvm, gfn_t start,
> > > +							    gfn_t end)
> > > +{
> > > +	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
> > > +		kvm->mmu_invalidate_range_start = start;
> > > +		kvm->mmu_invalidate_range_end = end;
> > > +	} else {
> > > +		/*
> > > +		 * Fully tracking multiple concurrent ranges has diminishing
> > > +		 * returns. Keep things simple and just find the minimal range
> > > +		 * which includes the current and new ranges. As there won't be
> > > +		 * enough information to subtract a range after its invalidate
> > > +		 * completes, any ranges invalidated concurrently will
> > > +		 * accumulate and persist until all outstanding invalidates
> > > +		 * complete.
> > > +		 */
> > > +		kvm->mmu_invalidate_range_start =
> > > +			min(kvm->mmu_invalidate_range_start, start);
> > > +		kvm->mmu_invalidate_range_end =
> > > +			max(kvm->mmu_invalidate_range_end, end);
> > > +	}
> > > +}
> > > +
> > > +static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end)
> > > +{
> > > +	/*
> > > +	 * The count increase must become visible at unlock time as no
> > > +	 * spte can be established without taking the mmu_lock and
> > > +	 * count is also read inside the mmu_lock critical section.
> > > +	 */
> > > +	kvm->mmu_invalidate_in_progress++;
> > > +}
> > > +
> > > +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end)
> > > +{
> > > +	mark_invalidate_in_progress(kvm, start, end);
> > > +	update_invalidate_range(kvm, start, end);
> > > +}
> > > +
> > > +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end)
> > > +{
> > > +	/*
> > > +	 * This sequence increase will notify the kvm page fault that
> > > +	 * the page that is going to be mapped in the spte could have
> > > +	 * been freed.
> > > +	 */
> > > +	kvm->mmu_invalidate_seq++;
> > > +	smp_wmb();
> > > +	/*
> > > +	 * The above sequence increase must be visible before the
> > > +	 * below count decrease, which is ensured by the smp_wmb above
> > > +	 * in conjunction with the smp_rmb in mmu_invalidate_retry().
> > > +	 */
> > > +	kvm->mmu_invalidate_in_progress--;
> > > +}
> > > +
> > >  #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
> > >  static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
> > >  {
> > > @@ -715,51 +771,12 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
> > >  	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
> > >  }
> > >
> > > -static inline void update_invalidate_range(struct kvm *kvm, gfn_t start,
> > > -							    gfn_t end)
> > > -{
> > > -	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
> > > -		kvm->mmu_invalidate_range_start = start;
> > > -		kvm->mmu_invalidate_range_end = end;
> > > -	} else {
> > > -		/*
> > > -		 * Fully tracking multiple concurrent ranges has diminishing
> > > -		 * returns. Keep things simple and just find the minimal range
> > > -		 * which includes the current and new ranges. As there won't be
> > > -		 * enough information to subtract a range after its invalidate
> > > -		 * completes, any ranges invalidated concurrently will
> > > -		 * accumulate and persist until all outstanding invalidates
> > > -		 * complete.
> > > -		 */
> > > -		kvm->mmu_invalidate_range_start =
> > > -			min(kvm->mmu_invalidate_range_start, start);
> > > -		kvm->mmu_invalidate_range_end =
> > > -			max(kvm->mmu_invalidate_range_end, end);
> > > -	}
> > > -}
> > > -
> > > -static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end)
> > > -{
> > > -	/*
> > > -	 * The count increase must become visible at unlock time as no
> > > -	 * spte can be established without taking the mmu_lock and
> > > -	 * count is also read inside the mmu_lock critical section.
> > > -	 */
> > > -	kvm->mmu_invalidate_in_progress++;
> > > -}
> > > -
> > >  static bool kvm_mmu_handle_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
> > >  {
> > >  	update_invalidate_range(kvm, range->start, range->end);
> > >  	return kvm_unmap_gfn_range(kvm, range);
> > >  }
> > >
> > > -void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end)
> > > -{
> > > -	mark_invalidate_in_progress(kvm, start, end);
> > > -	update_invalidate_range(kvm, start, end);
> > > -}
> > > -
> > >  static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> > >  					const struct mmu_notifier_range *range)
> > >  {
> > > @@ -807,23 +824,6 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> > >  	return 0;
> > >  }
> > >
> > > -void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end)
> > > -{
> > > -	/*
> > > -	 * This sequence increase will notify the kvm page fault that
> > > -	 * the page that is going to be mapped in the spte could have
> > > -	 * been freed.
> > > -	 */
> > > -	kvm->mmu_invalidate_seq++;
> > > -	smp_wmb();
> > > -	/*
> > > -	 * The above sequence increase must be visible before the
> > > -	 * below count decrease, which is ensured by the smp_wmb above
> > > -	 * in conjunction with the smp_rmb in mmu_invalidate_retry().
> > > -	 */
> > > -	kvm->mmu_invalidate_in_progress--;
> > > -}
> > > -
> > >  static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> > >  					const struct mmu_notifier_range *range)
> > >  {
> > > @@ -937,6 +937,89 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)
> > >
> > >  #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */
> > >
> > > +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
> > > +
> > > +static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end)
> > > +{
> > > +	struct kvm_gfn_range gfn_range;
> > > +	struct kvm_memory_slot *slot;
> > > +	struct kvm_memslots *slots;
> > > +	struct kvm_memslot_iter iter;
> > > +	int i;
> > > +	int r = 0;
> > > +
> > > +	gfn_range.pte = __pte(0);
> > > +	gfn_range.may_block = true;
> > > +
> > > +	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
> > > +		slots = __kvm_memslots(kvm, i);
> > > +
> > > +		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
> > > +			slot = iter.slot;
> > > +			gfn_range.start = max(start, slot->base_gfn);
> > > +			gfn_range.end = min(end, slot->base_gfn + slot->npages);
> > > +			if (gfn_range.start >= gfn_range.end)
> > > +				continue;
> > > +			gfn_range.slot = slot;
> > > +
> > > +			r |= kvm_unmap_gfn_range(kvm, &gfn_range);
> > > +		}
> > > +	}
> > > +
> > > +	if (r)
> > > +		kvm_flush_remote_tlbs(kvm);
> > > +}
> > > +
> > > +#define KVM_MEM_ATTR_SHARED	0x0001
> > > +static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size,
> > > +				     bool is_private)
> > > +{
> > > +	gfn_t start, end;
> > > +	unsigned long i;
> > > +	void *entry;
> > > +	int idx;
> > > +	int r = 0;
> > > +
> > > +	if (size == 0 || gpa + size < gpa)
> > > +		return -EINVAL;
> > > +	if (gpa & (PAGE_SIZE - 1) || size & (PAGE_SIZE - 1))
> > > +		return -EINVAL;
> > > +
> > > +	start = gpa >> PAGE_SHIFT;
> > > +	end = (gpa + size - 1 + PAGE_SIZE) >> PAGE_SHIFT;
> > > +
> > > +	/*
> > > +	 * Guest memory defaults to private, kvm->mem_attr_array only stores
> > > +	 * shared memory.
> > > +	 */
> > > +	entry = is_private ? NULL : xa_mk_value(KVM_MEM_ATTR_SHARED);
> > > +
> > > +	idx = srcu_read_lock(&kvm->srcu);
> > > +	KVM_MMU_LOCK(kvm);
> > > +	kvm_mmu_invalidate_begin(kvm, start, end);
> > > +
> > > +	for (i = start; i < end; i++) {
> > > +		r = xa_err(xa_store(&kvm->mem_attr_array, i, entry,
> > > +				    GFP_KERNEL_ACCOUNT));
> > > +		if (r)
> > > +			goto err;
> > > +	}
> > > +
> > > +	kvm_unmap_mem_range(kvm, start, end);
> >
> > The lock is held by KVM_MMU_LOCK(), so how about doing
> > kvm_mmu_invalidate_begin() after changing the xarray:
> >
> > kvm_mmu_invalidate_begin(kvm, start, end);
> > kvm_unmap_mem_range(kvm, start, end);
> > kvm_mmu_invalidate_end(kvm, start, end);
> >
> > Also, the error handling path then doesn't need to care about it.
>
> The mem_attr_array is consumed in the page fault handler (i.e.
> kvm_mem_is_private() in patch 08) so it should also be protected by
> kvm_mmu_invalidate_begin/end(). E.g. if we change the mem_attr_array here
> after the page fault handler has read the mem_attr_array, then
> mmu_invalidate_retry_gfn() should return 1 to make the page fault handler
> retry the fault.

You're right!
Even if the changes are undone by the error handling path, we still need
to make sure that users of mem_attr_array retry the fault, because a user
may have read some "stale" data (it is "stale" because the xarray is
rolled back in the error case).
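
To spell out the pairing on the consumer side, the retry pattern I
understand it to be is roughly this (a sketch only; kvm_mem_is_private()
is from patch 08 and mmu_seq is a local in the fault handler):

	mmu_seq = kvm->mmu_invalidate_seq;
	smp_rmb();	/* pairs with the smp_wmb() in kvm_mmu_invalidate_end() */
	fault->is_private = kvm_mem_is_private(kvm, fault->gfn);
	...
	write_lock(&kvm->mmu_lock);
	/* Recheck under mmu_lock before installing the spte. */
	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, fault->gfn))
		goto retry;	/* the attributes changed in between, refault */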

>
> >
> > > +
> > > +	goto ret;
> > > +err:
> > > +	for (; i > start; i--)
> > > +		xa_erase(&kvm->mem_attr_array, i);
> >
> > the entry at start should be covered as well; and since i is an
> > unsigned long, a loop condition of i >= start would underflow when
> > start is 0, so this may need another variable j.
>
> Ah, right!
>
> Thanks,
> Chao
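
One possible shape for the fixed unwind, just as a sketch (j would be an
additional unsigned long local):

	err:
		/*
		 * xa_store() failed at index i, so entries [start, i) were
		 * written; erase them with a forward-walking index to avoid
		 * the unsigned underflow when start == 0.
		 */
		for (j = start; j < i; j++)
			xa_erase(&kvm->mem_attr_array, j);
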
> >
> > > +ret:
> > > +	kvm_mmu_invalidate_end(kvm, start, end);
> > > +	KVM_MMU_UNLOCK(kvm);
> > > +	srcu_read_unlock(&kvm->srcu, idx);
> > > +
> > > +	return r;
> > > +}
> > > +#endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */
> > > +
> > >  #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
> > >  static int kvm_pm_notifier_call(struct notifier_block *bl,
> > >  				unsigned long state,
> > > @@ -1165,6 +1248,9 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
> > >  	spin_lock_init(&kvm->mn_invalidate_lock);
> > >  	rcuwait_init(&kvm->mn_memslots_update_rcuwait);
> > >  	xa_init(&kvm->vcpu_array);
> > > +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
> > > +	xa_init(&kvm->mem_attr_array);
> > > +#endif
> > >
> > >  	INIT_LIST_HEAD(&kvm->gpc_list);
> > >  	spin_lock_init(&kvm->gpc_lock);
> > > @@ -1338,6 +1424,9 @@ static void kvm_destroy_vm(struct kvm *kvm)
> > >  		kvm_free_memslots(kvm, &kvm->__memslots[i][0]);
> > >  		kvm_free_memslots(kvm, &kvm->__memslots[i][1]);
> > >  	}
> > > +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
> > > +	xa_destroy(&kvm->mem_attr_array);
> > > +#endif
> > >  	cleanup_srcu_struct(&kvm->irq_srcu);
> > >  	cleanup_srcu_struct(&kvm->srcu);
> > >  	kvm_arch_free_vm(kvm);
> > > @@ -1541,6 +1630,11 @@ static void kvm_replace_memslot(struct kvm *kvm,
> > >  	}
> > >  }
> > >
> > > +bool __weak kvm_arch_has_private_mem(struct kvm *kvm)
> > > +{
> > > +	return false;
> > > +}
> > > +
> > >  static int check_memory_region_flags(const struct kvm_user_mem_region *mem)
> > >  {
> > >  	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
> > > @@ -4708,6 +4802,24 @@ static long kvm_vm_ioctl(struct file *filp,
> > >  		r = kvm_vm_ioctl_set_memory_region(kvm, &mem);
> > >  		break;
> > >  	}
> > > +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
> > > +	case KVM_MEMORY_ENCRYPT_REG_REGION:
> > > +	case KVM_MEMORY_ENCRYPT_UNREG_REGION: {
> > > +		struct kvm_enc_region region;
> > > +		bool set = ioctl == KVM_MEMORY_ENCRYPT_REG_REGION;
> > > +
> > > +		if (!kvm_arch_has_private_mem(kvm))
> > > +			goto arch_vm_ioctl;
> > > +
> > > +		r = -EFAULT;
> > > +		if (copy_from_user(&region, argp, sizeof(region)))
> > > +			goto out;
> > > +
> > > +		r = kvm_vm_ioctl_set_mem_attr(kvm, region.addr,
> > > +					      region.size, set);
> > > +		break;
> > > +	}
> > > +#endif
> > >  	case KVM_GET_DIRTY_LOG: {
> > >  		struct kvm_dirty_log log;
> > >
> > > @@ -4861,6 +4973,9 @@ static long kvm_vm_ioctl(struct file *filp,
> > >  		r = kvm_vm_ioctl_get_stats_fd(kvm);
> > >  		break;
> > >  	default:
> > > +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
> > > +arch_vm_ioctl:
> > > +#endif
> > >  		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
> > >  	}
> > >  out:
> > > --
> > > 2.25.1
> > >
> > >

