From: Steven Price <steven.price@arm.com>
To: Peter Collingbourne <pcc@google.com>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, Catalin Marinas <catalin.marinas@arm.com>,
	Cornelia Huck <cohuck@redhat.com>, Marc Zyngier <maz@kernel.org>,
	Vincenzo Frascino <vincenzo.frascino@arm.com>,
	Will Deacon <will@kernel.org>,
	Evgenii Stepanov <eugenis@google.com>
Subject: Re: [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic
Date: Fri, 2 Sep 2022 15:47:30 +0100	[thread overview]
Message-ID: <54b979fc-5cb3-6eb4-47d4-e07e99359db9@arm.com> (raw)
In-Reply-To: <20220810193033.1090251-3-pcc@google.com>

On 10/08/2022 20:30, Peter Collingbourne wrote:
> From: Catalin Marinas <catalin.marinas@arm.com>
> 
> Currently sanitise_mte_tags() checks whether the page is online before
> attempting to sanitise the tags. Such detection should instead be done
> in the caller via the VM_MTE_ALLOWED vma flag. Since kvm_set_spte_gfn()
> does not have access to the vma, leave the page unmapped if it is not
> already tagged. Tag initialisation will then be done on a subsequent
> access fault in user_mem_abort().

Looks correct to me.

Reviewed-by: Steven Price <steven.price@arm.com>
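
As an aside, for anyone reviewing this patch in isolation: it relies on
the page_mte_tagged()/set_page_mte_tagged() helpers added in patch 1 of
this series. A minimal sketch of the semantics assumed here (not the
exact patch 1 code) would be:

	static inline void set_page_mte_tagged(struct page *page)
	{
		/* Make the tag writes visible before the flag is seen set. */
		smp_wmb();
		set_bit(PG_mte_tagged, &page->flags);
	}

	static inline bool page_mte_tagged(struct page *page)
	{
		bool ret = test_bit(PG_mte_tagged, &page->flags);

		/*
		 * Pairs with set_page_mte_tagged(): order the flag read
		 * before any subsequent read of the tags themselves.
		 */
		if (ret)
			smp_rmb();
		return ret;
	}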

> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Peter Collingbourne <pcc@google.com>
> ---
>  arch/arm64/kvm/mmu.c | 40 +++++++++++++++-------------------------
>  1 file changed, 15 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c9012707f69c..1a3707aeb41f 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1056,23 +1056,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
>   * - mmap_lock protects between a VM faulting a page in and the VMM performing
>   *   an mprotect() to add VM_MTE
>   */
> -static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> -			     unsigned long size)
> +static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> +			      unsigned long size)
>  {
>  	unsigned long i, nr_pages = size >> PAGE_SHIFT;
> -	struct page *page;
> +	struct page *page = pfn_to_page(pfn);
>  
>  	if (!kvm_has_mte(kvm))
> -		return 0;
> -
> -	/*
> -	 * pfn_to_online_page() is used to reject ZONE_DEVICE pages
> -	 * that may not support tags.
> -	 */
> -	page = pfn_to_online_page(pfn);
> -
> -	if (!page)
> -		return -EFAULT;
> +		return;
>  
>  	for (i = 0; i < nr_pages; i++, page++) {
>  		if (!page_mte_tagged(page)) {
> @@ -1080,8 +1071,6 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  			set_page_mte_tagged(page);
>  		}
>  	}
> -
> -	return 0;
>  }
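
For readers skimming just the hunks: if I'm reading the base tree right,
the line elided between these two hunks is the mte_clear_page_tags()
call, so the loop after this patch is roughly:

	for (i = 0; i < nr_pages; i++, page++) {
		if (!page_mte_tagged(page)) {
			mte_clear_page_tags(page_address(page));
			set_page_mte_tagged(page);
		}
	}

i.e. each page's tags are cleared at most once, guarded by the flag.
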
>  
>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> @@ -1092,7 +1081,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	bool write_fault, writable, force_pte = false;
>  	bool exec_fault;
>  	bool device = false;
> -	bool shared;
>  	unsigned long mmu_seq;
>  	struct kvm *kvm = vcpu->kvm;
>  	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> @@ -1142,8 +1130,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		vma_shift = get_vma_page_shift(vma, hva);
>  	}
>  
> -	shared = (vma->vm_flags & VM_SHARED);
> -
>  	switch (vma_shift) {
>  #ifndef __PAGETABLE_PMD_FOLDED
>  	case PUD_SHIFT:
> @@ -1264,12 +1250,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  
>  	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
>  		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
> -		if (!shared)
> -			ret = sanitise_mte_tags(kvm, pfn, vma_pagesize);
> -		else
> +		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
> +		    !(vma->vm_flags & VM_SHARED)) {
> +			sanitise_mte_tags(kvm, pfn, vma_pagesize);
> +		} else {
>  			ret = -EFAULT;
> -		if (ret)
>  			goto out_unlock;
> +		}
>  	}
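
To spell out the user-visible effect here (my understanding of the
existing behaviour, not something this patch changes): the -EFAULT
propagates out of the fault path and fails KVM_RUN, so a VMM mapping
guest memory with MAP_SHARED on an MTE-enabled VM would see something
like the following (hypothetical vcpu_fd setup elided):

	#include <errno.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int run_vcpu(int vcpu_fd)
	{
		/*
		 * A stage-2 fault on a VM_SHARED (or !VM_MTE_ALLOWED)
		 * mapping is reported as EFAULT from KVM_RUN.
		 */
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0 && errno == EFAULT) {
			fprintf(stderr, "KVM_RUN: EFAULT (MTE-incompatible mapping?)\n");
			return -1;
		}
		return 0;
	}
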
>  
>  	if (writable)
> @@ -1491,15 +1478,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
>  	kvm_pfn_t pfn = pte_pfn(range->pte);
> -	int ret;
>  
>  	if (!kvm->arch.mmu.pgt)
>  		return false;
>  
>  	WARN_ON(range->end - range->start != 1);
>  
> -	ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE);
> -	if (ret)
> +	/*
> +	 * If the page isn't tagged, defer to user_mem_abort() for sanitising
> +	 * the MTE tags. The S2 pte should have been unmapped by
> +	 * mmu_notifier_invalidate_range_end().
> +	 */
> +	if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn)))
>  		return false;
>  
>  	/*

Thread overview: 103+ messages

2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
2022-08-10 19:30 ` [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics Peter Collingbourne
2022-09-01 15:49   ` Catalin Marinas
2022-09-02 10:26   ` Cornelia Huck
2022-09-02 14:47   ` Steven Price
2022-08-10 19:30 ` [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic Peter Collingbourne
2022-09-02 14:47   ` Steven Price [this message]
2022-08-10 19:30 ` [PATCH v3 3/7] mm: Add PG_arch_3 page flag Peter Collingbourne
2022-08-11  7:16   ` kernel test robot
2022-09-01 17:59     ` Catalin Marinas
2022-09-05 17:01       ` Catalin Marinas
2022-09-19 18:12         ` Marc Zyngier
2022-09-20 15:39           ` Catalin Marinas
2022-09-20 16:33             ` Marc Zyngier
2022-09-20 16:58               ` Catalin Marinas
2022-09-21  3:53                 ` Peter Collingbourne
2022-08-10 19:30 ` [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation Peter Collingbourne
2022-09-02 14:47   ` Steven Price
2022-09-02 16:28     ` Catalin Marinas
2022-09-02 16:58       ` Catalin Marinas
2022-09-05  7:37         ` Steven Price
2022-08-10 19:30 ` [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled Peter Collingbourne
2022-09-02 13:41   ` Catalin Marinas
2022-09-02 14:47   ` Steven Price
2022-08-10 19:30 ` [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled Peter Collingbourne
2022-09-02 13:45   ` Catalin Marinas
2022-09-02 14:47     ` Steven Price
2022-09-12 16:23   ` Marc Zyngier
2022-09-13  4:10     ` Peter Collingbourne
2022-08-10 19:30 ` [PATCH v3 7/7] Documentation: document the ABI changes for KVM_CAP_ARM_MTE Peter Collingbourne
2022-09-02 13:49   ` Catalin Marinas
2022-09-02 14:05 ` [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Catalin Marinas
