From: Zhi Wang <zhi.wang.linux@gmail.com>
To: Steven Price <steven.price@arm.com>
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev,
	Catalin Marinas <catalin.marinas@arm.com>,
	Marc Zyngier <maz@kernel.org>, Will Deacon <will@kernel.org>,
	James Morse <james.morse@arm.com>,
	Oliver Upton <oliver.upton@linux.dev>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Zenghui Yu <yuzenghui@huawei.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Joey Gouly <joey.gouly@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Christoffer Dall <christoffer.dall@arm.com>,
	Fuad Tabba <tabba@google.com>,
	linux-coco@lists.linux.dev
Subject: Re: [RFC PATCH 17/28] arm64: RME: Runtime faulting of memory
Date: Tue, 14 Mar 2023 18:41:23 +0200	[thread overview]
Message-ID: <20230314184123.000022ee@gmail.com> (raw)
In-Reply-To: <554bbe2d-ead5-187d-7460-a8c03f2528fa@arm.com>

On Fri, 10 Mar 2023 15:47:19 +0000
Steven Price <steven.price@arm.com> wrote:

> On 06/03/2023 18:20, Zhi Wang wrote:
> > On Fri, 27 Jan 2023 11:29:21 +0000
> > Steven Price <steven.price@arm.com> wrote:
> >   
> >> At runtime if the realm guest accesses memory which hasn't yet been
> >> mapped then KVM needs to either populate the region or fault the guest.
> >>
> >> For memory in the lower (protected) region of IPA a fresh page is
> >> provided to the RMM which will zero the contents. For memory in the
> >> upper (shared) region of IPA, the memory from the memslot is mapped
> >> into the realm VM non secure.
> >>
> >> Signed-off-by: Steven Price <steven.price@arm.com>
> >> ---
> >>  arch/arm64/include/asm/kvm_emulate.h | 10 +++++
> >>  arch/arm64/include/asm/kvm_rme.h     | 12 ++++++
> >>  arch/arm64/kvm/mmu.c                 | 64 +++++++++++++++++++++++++---
> >>  arch/arm64/kvm/rme.c                 | 48 +++++++++++++++++++++
> >>  4 files changed, 128 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> >> index 285e62914ca4..3a71b3d2e10a 100644
> >> --- a/arch/arm64/include/asm/kvm_emulate.h
> >> +++ b/arch/arm64/include/asm/kvm_emulate.h
> >> @@ -502,6 +502,16 @@ static inline enum realm_state kvm_realm_state(struct kvm *kvm)
> >>  	return READ_ONCE(kvm->arch.realm.state);
> >>  }
> >>  
> >> +static inline gpa_t kvm_gpa_stolen_bits(struct kvm *kvm)
> >> +{
> >> +	if (kvm_is_realm(kvm)) {
> >> +		struct realm *realm = &kvm->arch.realm;
> >> +
> >> +		return BIT(realm->ia_bits - 1);
> >> +	}
> >> +	return 0;
> >> +}
> >> +  
> > 
> > "stolen" seems a little bit vague. Maybe "shared" bit would be better as
> > SEV-SNP has C bit and TDX has shared bit. It would be nice to align with
> > the common knowledge.  
> 
> The Arm CCA term is the "protected" bit[1] - although the bit is
> backwards as it's cleared to indicate protected... so not ideal naming! ;)
> 
> But it's termed 'stolen' here as it's effectively removed from the set
> of valid address bits. And this function is returning a mask of the bits
> that are not available as address bits. The naming was meant to be
> generic so that this could encompass other features that need to reserve
> IPA bits.
> 
> But it's possible this is too generic and perhaps we should just deal
> with a single bit rather than potential masks. Alternatively we could
> invert this and return a set of valid bits:
> 
> static inline gpa_t kvm_gpa_valid_bits(struct kvm *kvm)
> {
> 	if (kvm_is_realm(kvm)) {
> 		struct realm *realm = &kvm->arch.realm;
> 
> 		return ~BIT(realm->ia_bits - 1);
> 	}
> 	return ~(gpa_t)0;
> }
> 
> That would at least match the current usage where the inverse is what we
> need.
> 
> So do SEV-SNP or TDX have a concept of a mask to apply to addresses from
> the guest? Can we steal any existing terms?
> 

At a general level, they both use the terms "shared"/"private". TDX uses
a function kvm_gfn_shared_mask() to get the mask and three other macros
to apply the mask to a GPA (IPA)[1]. SEV-SNP re-uses the SME macros,
e.g. __sme_clr(), to apply the mask[2].
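
For reference, the SME helpers are essentially just set/clear wrappers
around a global mask; roughly (quoting from memory, not the exact
source):

	#define __sme_set(x)	((x) | sme_me_mask)
	#define __sme_clr(x)	((x) & ~sme_me_mask)

A CCA equivalent would have the same shape, just with the inverted
"protected means bit clear" polarity.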

I guess we can take them as a reference: use an inline function to get
the protected bit mask, e.g. kvm_ipa_protected_mask(), with the spec
text you pasted as a comment on the function. That way the name echoes
the spec description.

Then add the other necessary functions, like kvm_gpa_{is,to}_{shared,
private}, which apply the mask to a GPA (IPA), to echo the terms used
in generic KVM. (I guess realm_is_addr_protected() could be folded into
that scheme.) See the sketch below.
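
Just to sketch the shape (the mask computation is borrowed from
kvm_gpa_stolen_bits() in this patch; the helper names are only
placeholders, not a concrete proposal):

	static inline gpa_t kvm_ipa_protected_mask(struct kvm *kvm)
	{
		/*
		 * "Software in a Realm should treat the most significant
		 * bit of an IPA as a protection attribute." The bit is
		 * clear for protected IPAs and set for shared ones.
		 */
		if (kvm_is_realm(kvm))
			return BIT(kvm->arch.realm.ia_bits - 1);
		return 0;
	}

	static inline bool kvm_gpa_is_shared(struct kvm *kvm, gpa_t gpa)
	{
		return !!(gpa & kvm_ipa_protected_mask(kvm));
	}

	static inline gpa_t kvm_gpa_to_private(struct kvm *kvm, gpa_t gpa)
	{
		return gpa & ~kvm_ipa_protected_mask(kvm);
	}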

[1] https://www.spinics.net/lists/kernel/msg4718104.html
[2] https://lore.kernel.org/lkml/20230220183847.59159-25-michael.roth@amd.com/

> 
> [1] Technically the spec only states: "Software in a Realm should treat
> the most significant bit of an IPA as a protection attribute." I don't
> think the bit is directly referred to in the spec as anything other than
> "the most significant bit". Although that in itself is confusing as it
> is the most significant *active* bit (i.e. the configured IPA size
> changes which bit is used).
> 
> > Also, it would be nice to change the name of gpa_stolen_mask accordingly.
> >   
> >>  static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
> >>  {
> >>  	if (static_branch_unlikely(&kvm_rme_is_available))
> >> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> >> index 9d1583c44a99..303e4a5e5704 100644
> >> --- a/arch/arm64/include/asm/kvm_rme.h
> >> +++ b/arch/arm64/include/asm/kvm_rme.h
> >> @@ -50,6 +50,18 @@ void kvm_destroy_rec(struct kvm_vcpu *vcpu);
> >>  int kvm_rec_enter(struct kvm_vcpu *vcpu);
> >>  int handle_rme_exit(struct kvm_vcpu *vcpu, int rec_run_status);
> >>  
> >> +void kvm_realm_unmap_range(struct kvm *kvm, unsigned long ipa, u64 size);
> >> +int realm_map_protected(struct realm *realm,
> >> +			unsigned long hva,
> >> +			unsigned long base_ipa,
> >> +			struct page *dst_page,
> >> +			unsigned long map_size,
> >> +			struct kvm_mmu_memory_cache *memcache);
> >> +int realm_map_non_secure(struct realm *realm,
> >> +			 unsigned long ipa,
> >> +			 struct page *page,
> >> +			 unsigned long map_size,
> >> +			 struct kvm_mmu_memory_cache *memcache);
> >>  int realm_set_ipa_state(struct kvm_vcpu *vcpu,
> >>  			unsigned long addr, unsigned long end,
> >>  			unsigned long ripas);
> >> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> >> index f29558c5dcbc..5417c273861b 100644
> >> --- a/arch/arm64/kvm/mmu.c
> >> +++ b/arch/arm64/kvm/mmu.c
> >> @@ -235,8 +235,13 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
> >>  
> >>  	lockdep_assert_held_write(&kvm->mmu_lock);
> >>  	WARN_ON(size & ~PAGE_MASK);
> >> -	WARN_ON(stage2_apply_range(kvm, start, end, kvm_pgtable_stage2_unmap,
> >> -				   may_block));
> >> +
> >> +	if (kvm_is_realm(kvm))
> >> +		kvm_realm_unmap_range(kvm, start, size);
> >> +	else
> >> +		WARN_ON(stage2_apply_range(kvm, start, end,
> >> +					   kvm_pgtable_stage2_unmap,
> >> +					   may_block));
> >>  }
> >>  
> >>  static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
> >> @@ -250,7 +255,11 @@ static void stage2_flush_memslot(struct kvm *kvm,
> >>  	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
> >>  	phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
> >>  
> >> -	stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_flush);
> >> +	if (kvm_is_realm(kvm))
> >> +		kvm_realm_unmap_range(kvm, addr, end - addr);
> >> +	else
> >> +		stage2_apply_range_resched(kvm, addr, end,
> >> +					   kvm_pgtable_stage2_flush);
> >>  }
> >>  
> >>  /**
> >> @@ -818,6 +827,10 @@ void stage2_unmap_vm(struct kvm *kvm)
> >>  	struct kvm_memory_slot *memslot;
> >>  	int idx, bkt;
> >>  
> >> +	/* For realms this is handled by the RMM so nothing to do here */
> >> +	if (kvm_is_realm(kvm))
> >> +		return;
> >> +
> >>  	idx = srcu_read_lock(&kvm->srcu);
> >>  	mmap_read_lock(current->mm);
> >>  	write_lock(&kvm->mmu_lock);
> >> @@ -840,6 +853,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
> >>  	pgt = mmu->pgt;
> >>  	if (kvm_is_realm(kvm) &&
> >>  	    kvm_realm_state(kvm) != REALM_STATE_DYING) {
> >> +		unmap_stage2_range(mmu, 0, (~0ULL) & PAGE_MASK);
> >>  		write_unlock(&kvm->mmu_lock);
> >>  		kvm_realm_destroy_rtts(&kvm->arch.realm, pgt->ia_bits,
> >>  				       pgt->start_level);
> >> @@ -1190,6 +1204,24 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> >>  	return vma->vm_flags & VM_MTE_ALLOWED;
> >>  }
> >>  
> >> +static int realm_map_ipa(struct kvm *kvm, phys_addr_t ipa, unsigned long hva,
> >> +			 kvm_pfn_t pfn, unsigned long map_size,
> >> +			 enum kvm_pgtable_prot prot,
> >> +			 struct kvm_mmu_memory_cache *memcache)
> >> +{
> >> +	struct realm *realm = &kvm->arch.realm;
> >> +	struct page *page = pfn_to_page(pfn);
> >> +
> >> +	if (WARN_ON(!(prot & KVM_PGTABLE_PROT_W)))
> >> +		return -EFAULT;
> >> +
> >> +	if (!realm_is_addr_protected(realm, ipa))
> >> +		return realm_map_non_secure(realm, ipa, page, map_size,
> >> +					    memcache);
> >> +
> >> +	return realm_map_protected(realm, hva, ipa, page, map_size, memcache);
> >> +}
> >> +
> >>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  			  struct kvm_memory_slot *memslot, unsigned long hva,
> >>  			  unsigned long fault_status)
> >> @@ -1210,9 +1242,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  	unsigned long vma_pagesize, fault_granule;
> >>  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
> >>  	struct kvm_pgtable *pgt;
> >> +	gpa_t gpa_stolen_mask = kvm_gpa_stolen_bits(vcpu->kvm);
> >>  
> >>  	fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level);
> >>  	write_fault = kvm_is_write_fault(vcpu);
> >> +
> >> +	/* Realms cannot map read-only */  
> > 
> > Out of curiosity, why? It would be nice to have more explanation in the
> > comment.  
> 
> The RMM specification doesn't support mapping protected memory read
> only. I don't believe there is any reason why it couldn't, but equally I
> don't think there any use cases for a guest needing read-only pages so
> this just isn't supported by the RMM. Since the page is necessarily
> taken away from the host it's fairly irrelevant (from the host's
> perspective) whether it is actually read only or not.
> 
> However, this is technically wrong for the case of unprotected (shared)
> pages - it should be possible to map those read only. But I need to have
> a think about how to fix that up.

If the fault IPA carries the protected bit, can't we do something like
the following (using realm_is_addr_protected() from this patch):

	if (vcpu_is_rec(vcpu) &&
	    realm_is_addr_protected(&kvm->arch.realm, fault_ipa))
		write_fault = true;

so that unprotected (shared) faults keep their real fault type? Are
there still other gaps?
>
> >> +	if (vcpu_is_rec(vcpu))
> >> +		write_fault = true;
> >> +
> >>  	exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
> >>  	VM_BUG_ON(write_fault && exec_fault);
> >>  
> >> @@ -1272,7 +1310,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
> >>  		fault_ipa &= ~(vma_pagesize - 1);
> >>  
> >> -	gfn = fault_ipa >> PAGE_SHIFT;
> >> +	gfn = (fault_ipa & ~gpa_stolen_mask) >> PAGE_SHIFT;
> >>  	mmap_read_unlock(current->mm);
> >>  
> >>  	/*
> >> @@ -1345,7 +1383,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  	 * If we are not forced to use page mapping, check if we are
> >>  	 * backed by a THP and thus use block mapping if possible.
> >>  	 */
> >> -	if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
> >> +	/* FIXME: We shouldn't need to disable this for realms */
> >> +	if (vma_pagesize == PAGE_SIZE && !(force_pte || device || kvm_is_realm(kvm))) {  
> > 
> > Why do we have to disable this temporarily?  
> 
> The current uABI (not using memfd) has some serious issues regarding
> huge page support. KVM normally follows the user space mappings of the
> memslot - so if user space has a huge page (transparent or hugetlbs)
> then stage 2 for the guest also gets one.
> 
> However realms sometimes require that the stage 2 differs. The main
> examples are:
> 
>  * RIPAS - if part of a huge page is RIPAS_RAM and part RIPAS_EMPTY then
> the huge page would have to be split.
> 
>  * Initially populated memory: basically the same as above - if the
> populated memory doesn't perfectly align with huge pages, then the
> head/tail pages would need to be broken up.
> 
> Removing this hack allows the huge pages to be created in stage 2, but
> then causes overmapping of the initial contents, then later on when the
> VMM (or guest) attempts to change the properties of the misaligned tail
> it gets an error because the pages are already present in stage 2.
> 
> The planned solution to all this is to stop following the user space
> page tables and create huge pages opportunistically from the memfd that
> backs the protected range. For now this hack exists to avoid things
> "randomly" failing when e.g. the initial kernel image isn't huge page
> aligned. In theory it should be possible to make this work with the
> current uABI, but it's not worth it when we know we're replacing it.

I see. I will dig into it and see if any ideas come to mind.
> 
> >>  		if (fault_status == FSC_PERM && fault_granule > PAGE_SIZE)
> >>  			vma_pagesize = fault_granule;
> >>  		else
> >> @@ -1382,6 +1421,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  	 */
> >>  	if (fault_status == FSC_PERM && vma_pagesize == fault_granule)
> >>  		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
> >> +	else if (kvm_is_realm(kvm))
> >> +		ret = realm_map_ipa(kvm, fault_ipa, hva, pfn, vma_pagesize,
> >> +				    prot, memcache);
> >>  	else
> >>  		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
> >>  					     __pfn_to_phys(pfn), prot,
> >> @@ -1437,6 +1479,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> >>  	struct kvm_memory_slot *memslot;
> >>  	unsigned long hva;
> >>  	bool is_iabt, write_fault, writable;
> >> +	gpa_t gpa_stolen_mask = kvm_gpa_stolen_bits(vcpu->kvm);
> >>  	gfn_t gfn;
> >>  	int ret, idx;
> >>  
> >> @@ -1491,7 +1534,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> >>  
> >>  	idx = srcu_read_lock(&vcpu->kvm->srcu);
> >>  
> >> -	gfn = fault_ipa >> PAGE_SHIFT;
> >> +	gfn = (fault_ipa & ~gpa_stolen_mask) >> PAGE_SHIFT;
> >>  	memslot = gfn_to_memslot(vcpu->kvm, gfn);
> >>  	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
> >>  	write_fault = kvm_is_write_fault(vcpu);
> >> @@ -1536,6 +1579,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> >>  		 * of the page size.
> >>  		 */
> >>  		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
> >> +		fault_ipa &= ~gpa_stolen_mask;
> >>  		ret = io_mem_abort(vcpu, fault_ipa);
> >>  		goto out_unlock;
> >>  	}
> >> @@ -1617,6 +1661,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> >>  	if (!kvm->arch.mmu.pgt)
> >>  		return false;
> >>  
> > 
> > Does the unprotected (shared) region of a realm support aging?  
> 
> In theory this should be possible to support by unmapping the NS entry
> and handling the fault. But the hardware access flag optimisation isn't
> available with the RMM, and the overhead of RMI calls to unmap/map could
> be significant.
> 
> For now this isn't something we've looked at, but I guess it might be
> worth trying out when we have some real hardware to benchmark on.
> 
> >> +	/* We don't support aging for Realms */
> >> +	if (kvm_is_realm(kvm))
> >> +		return true;
> >> +
> >>  	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
> >>  
> >>  	kpte = kvm_pgtable_stage2_mkold(kvm->arch.mmu.pgt,
> >> @@ -1630,6 +1678,10 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> >>  	if (!kvm->arch.mmu.pgt)
> >>  		return false;
> >>  
> >> +	/* We don't support aging for Realms */
> >> +	if (kvm_is_realm(kvm))
> >> +		return true;
> >> +
> >>  	return kvm_pgtable_stage2_is_young(kvm->arch.mmu.pgt,
> >>  					   range->start << PAGE_SHIFT);
> >>  }
> >> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> >> index 3405b43e1421..3d46191798e5 100644
> >> --- a/arch/arm64/kvm/rme.c
> >> +++ b/arch/arm64/kvm/rme.c
> >> @@ -608,6 +608,54 @@ int realm_map_protected(struct realm *realm,
> >>  	return -ENXIO;
> >>  }
> >>  
> >> +int realm_map_non_secure(struct realm *realm,
> >> +			 unsigned long ipa,
> >> +			 struct page *page,
> >> +			 unsigned long map_size,
> >> +			 struct kvm_mmu_memory_cache *memcache)
> >> +{
> >> +	phys_addr_t rd = virt_to_phys(realm->rd);
> >> +	int map_level;
> >> +	int ret = 0;
> >> +	unsigned long desc = page_to_phys(page) |
> >> +			     PTE_S2_MEMATTR(MT_S2_FWB_NORMAL) |
> >> +			     /* FIXME: Read+Write permissions for now */  
> > Why can't we handle the prot from the realm_map_ipa()? Work in progress? :)
> 
> Yes, work in progress - this comes from the "Realms cannot map
> read-only" in user_mem_abort() above. Since all faults are treated as
> write faults we need to upgrade to read/write here too.
> 
> The prot in realm_map_ipa isn't actually useful currently because we
> simply WARN_ON and return if it doesn't have PROT_W. Again this needs to
> be fixed! It's on my todo list ;)
> 
> Steve
> 
> >> +			     (3 << 6) |
> >> +			     PTE_SHARED;
> >> +
> >> +	if (WARN_ON(!IS_ALIGNED(ipa, map_size)))
> >> +		return -EINVAL;
> >> +
> >> +	switch (map_size) {
> >> +	case PAGE_SIZE:
> >> +		map_level = 3;
> >> +		break;
> >> +	case RME_L2_BLOCK_SIZE:
> >> +		map_level = 2;
> >> +		break;
> >> +	default:
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
> >> +
> >> +	if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> >> +		/* Create missing RTTs and retry */
> >> +		int level = RMI_RETURN_INDEX(ret);
> >> +
> >> +		ret = realm_create_rtt_levels(realm, ipa, level, map_level,
> >> +					      memcache);
> >> +		if (WARN_ON(ret))
> >> +			return -ENXIO;
> >> +
> >> +		ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
> >> +	}
> >> +	if (WARN_ON(ret))
> >> +		return -ENXIO;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >>  static int populate_par_region(struct kvm *kvm,
> >>  			       phys_addr_t ipa_base,
> >>  			       phys_addr_t ipa_end)  
> >   
> 

