From: Quentin Perret <qperret@google.com>
To: Will Deacon <will@kernel.org>
Cc: catalin.marinas@arm.com, maz@kernel.org, james.morse@arm.com,
	julien.thierry.kdev@gmail.com, suzuki.poulose@arm.com,
	android-kvm@google.com, linux-kernel@vger.kernel.org,
	kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org, tabba@google.com,
	mark.rutland@arm.com, dbrazdil@google.com, mate.toth-pal@arm.com,
	seanjc@google.com, robh+dt@kernel.org
Subject: Re: [PATCH v3 28/32] KVM: arm64: Add kvm_pgtable_stage2_idmap_greedy()
Date: Fri, 5 Mar 2021 15:03:36 +0000
Message-ID: <YEJISCQOHNbs363+@google.com>
In-Reply-To: <20210305143941.GA23017@willie-the-truck>

On Friday 05 Mar 2021 at 14:39:42 (+0000), Will Deacon wrote:
> On Tue, Mar 02, 2021 at 02:59:58PM +0000, Quentin Perret wrote:
> > +/**
> > + * kvm_pgtable_stage2_idmap_greedy() - Identity-map an Intermediate Physical
> > + *				       Address with a leaf entry at the highest
> > + *				       possible level.
> 
> Not sure it's worth mentioning "highest possible level" here, as
> realistically the caller still has to provide a memcache to deal with the
> worst case and the structure of the page-table shouldn't matter.

Right, we need to pass a range, so I suppose that should be enough to
say 'this tries to cover large portions of memory'.
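
(To make the in/out behaviour of @range concrete, here is a rough
caller-side sketch based only on the prototype quoted below; the
surrounding names -- host_pgt, fault_ipa, host_s2_mc, region_start/end --
and the R|W|X prot value are illustrative assumptions, not taken from
this series:

	struct kvm_mem_range range = {
		.start	= region_start,	/* widest acceptable boundaries */
		.end	= region_end,
	};
	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W |
				     KVM_PGTABLE_PROT_X;
	int ret;

	ret = kvm_pgtable_stage2_idmap_greedy(host_pgt, fault_ipa, prot,
					      &range, host_s2_mc);
	/* On return, @range has been narrowed to what was actually mapped,
	 * which may be smaller than the boundaries passed in. */

so the caller can tell how much memory the call ended up covering.)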

> > + * @pgt:	Page-table structure initialised by kvm_pgtable_*_init().
> > + * @addr:	Input address to identity-map.
> > + * @prot:	Permissions and attributes for the mapping.
> > + * @range:	Boundaries of the maximum memory region to map.
> > + * @mc:		Cache of pre-allocated memory from which to allocate page-table
> > + *		pages.
> > + *
> > + * This function attempts to install high-level identity-mappings covering @addr
> 
> "high-level"? (again, I think I'd just drop this)
> 
> > + * without overriding existing mappings with incompatible permissions or
> > + * attributes. An existing table entry may be coalesced into a block mapping
> > + * if and only if it covers @addr and all its leafs are either invalid and/or
> 
> s/leafs/leaf entries/

Ack for both.

> > + * have permissions and attributes strictly matching @prot. The mapping is
> > + * guaranteed to be contained within the boundaries specified by @range at call
> > + * time. If only a subset of the memory specified by @range is mapped (because
> > + * of e.g. alignment issues or existing incompatible mappings), @range will be
> > + * updated accordingly.
> > + *
> > + * Return: 0 on success, negative error code on failure.
> > + */
> > +int kvm_pgtable_stage2_idmap_greedy(struct kvm_pgtable *pgt, u64 addr,
> > +				    enum kvm_pgtable_prot prot,
> > +				    struct kvm_mem_range *range,
> > +				    void *mc);
> >  #endif	/* __ARM64_KVM_PGTABLE_H__ */
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index 8aa01a9e2603..6897d771e2b2 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -987,3 +987,122 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
> >  	pgt->mm_ops->free_pages_exact(pgt->pgd, pgd_sz);
> >  	pgt->pgd = NULL;
> >  }
> > +
> > +struct stage2_reduce_range_data {
> > +	kvm_pte_t attr;
> > +	u64 target_addr;
> > +	u32 start_level;
> > +	struct kvm_mem_range *range;
> > +};
> > +
> > +static int __stage2_reduce_range(struct stage2_reduce_range_data *data, u64 addr)
> > +{
> > +	u32 level = data->start_level;
> > +
> > +	for (; level < KVM_PGTABLE_MAX_LEVELS; level++) {
> > +		u64 granule = kvm_granule_size(level);
> > +		u64 start = ALIGN_DOWN(data->target_addr, granule);
> > +		u64 end = start + granule;
> > +
> > +		/*
> > +		 * The pinned address is in the current range, try one level
> > +		 * deeper.
> > +		 */
> > +		if (start == ALIGN_DOWN(addr, granule))
> > +			continue;
> > +
> > +		/*
> > +		 * Make sure the current range is a reduction of the existing
> > +		 * range before updating it.
> > +		 */
> > +		if (data->range->start <= start && end <= data->range->end) {
> > +			data->start_level = level;
> > +			data->range->start = start;
> > +			data->range->end = end;
> > +			return 0;
> > +		}
> > +	}
> > +
> > +	return -EINVAL;
> > +}
> > +
> > +#define KVM_PTE_LEAF_S2_COMPAT_MASK	(KVM_PTE_LEAF_ATTR_S2_PERMS | \
> > +					 KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR | \
> > +					 KVM_PTE_LEAF_SW_BIT_PROT_NONE)
> > +
> > +static int stage2_reduce_range_walker(u64 addr, u64 end, u32 level,
> > +				      kvm_pte_t *ptep,
> > +				      enum kvm_pgtable_walk_flags flag,
> > +				      void * const arg)
> > +{
> > +	struct stage2_reduce_range_data *data = arg;
> > +	kvm_pte_t attr;
> > +	int ret;
> > +
> > +	if (addr < data->range->start || addr >= data->range->end)
> > +		return 0;
> > +
> > +	attr = *ptep & KVM_PTE_LEAF_S2_COMPAT_MASK;
> > +	if (!attr || attr == data->attr)
> > +		return 0;
> > +
> > +	/*
> > +	 * An existing mapping with incompatible protection attributes is
> > +	 * 'pinned', so reduce the range if we hit one.
> > +	 */
> > +	ret = __stage2_reduce_range(data, addr);
> > +	if (ret)
> > +		return ret;
> > +
> > +	return -EAGAIN;
> > +}
> > +
> > +static int stage2_reduce_range(struct kvm_pgtable *pgt, u64 addr,
> > +			       enum kvm_pgtable_prot prot,
> > +			       struct kvm_mem_range *range)
> > +{
> > +	struct stage2_reduce_range_data data = {
> > +		.start_level	= pgt->start_level,
> > +		.range		= range,
> > +		.target_addr	= addr,
> > +	};
> > +	struct kvm_pgtable_walker walker = {
> > +		.cb		= stage2_reduce_range_walker,
> > +		.flags		= KVM_PGTABLE_WALK_LEAF,
> > +		.arg		= &data,
> > +	};
> > +	int ret;
> > +
> > +	data.attr = stage2_get_prot_attr(prot) & KVM_PTE_LEAF_S2_COMPAT_MASK;
> > +	if (!data.attr)
> > +		return -EINVAL;
> 
> (this will need updating based on the other discussion we had)

Ack.

> > +	/* Reduce the kvm_mem_range to a granule size */
> > +	ret = __stage2_reduce_range(&data, range->end);
> > +	if (ret)
> > +		return ret;
> > +
> > +	/* Walk the range to check permissions and reduce further if needed */
> > +	do {
> > +		ret = kvm_pgtable_walk(pgt, range->start, range->end, &walker);
> 
> (we spent some time debugging an issue here and you spotted that you're
> passing range->end instead of the size ;)

Yep, I have the fix applied locally, and ready to fly in v4 :)
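
(For reference, the fix presumably amounts to passing the size of the
range rather than its end address, i.e. something along the lines of:

	ret = kvm_pgtable_walk(pgt, range->start, range->end - range->start,
			       &walker);

though the exact v4 change isn't shown here.)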

> > +	} while (ret == -EAGAIN);
> 
> I'm a bit nervous about this loop -- what guarantees forward progress here?
> Can we return to the host after a few tries instead?

-EAGAIN only happens when we've been able to successfully reduce the
range to a potentially valid granule size, and that can't happen
indefinitely.

We're guaranteed to fail when trying to reduce the range to a
granularity smaller than PAGE_SIZE (the -EINVAL case of
__stage2_reduce_range), which is indicative of a host memory abort in a
page the host should not access (because it is marked PROT_NONE, for
instance).
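
To illustrate that bound with something runnable, here is a standalone
userspace sketch -- not the kernel code; the 4K granule sizes, the level
numbering and the example addresses are all assumptions made for the
illustration. It models the reduce/walk/retry shape and shows that each
-EAGAIN round strictly descends in level, so the number of retries is
capped by the number of page-table levels:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define MAX_LEVELS	4

struct range { uint64_t start, end; };

static uint64_t granule_size(int level)
{
	/* 4K granule: level 3 = 4K, level 2 = 2M, level 1 = 1G */
	return 1ULL << (PAGE_SHIFT + (MAX_LEVELS - 1 - level) * 9);
}

/* Mirrors the shape of __stage2_reduce_range(): shrink @r around
 * @target, descending one level at a time, until the granule no longer
 * contains @pinned. */
static int reduce_range(struct range *r, int *level, uint64_t target,
			uint64_t pinned)
{
	for (; *level < MAX_LEVELS; (*level)++) {
		uint64_t g = granule_size(*level);
		uint64_t start = target & ~(g - 1);
		uint64_t end = start + g;

		if (start == (pinned & ~(g - 1)))
			continue;	/* pinned addr still inside, go deeper */

		if (r->start <= start && end <= r->end) {
			r->start = start;
			r->end = end;
			return 0;
		}
	}
	return -1;	/* cannot shrink below PAGE_SIZE */
}

int main(void)
{
	struct range r = { 0x40000000ULL, 0x80000000ULL };
	uint64_t target = 0x40200000ULL;	/* address to id-map */
	uint64_t pinned = 0x40201000ULL;	/* incompatible mapping nearby */
	int level = 1, retries = 0, ret;

	/* Initial reduction to a single granule, as stage2_reduce_range()
	 * does before walking. */
	ret = reduce_range(&r, &level, target, r.end);

	/* The "walk": retry only while a pinned address is still inside
	 * the range.  Each retry strictly raises @level, so the number of
	 * -EAGAIN rounds is capped by the number of levels. */
	while (!ret && pinned >= r.start && pinned < r.end) {
		retries++;
		ret = reduce_range(&r, &level, target, pinned);
	}

	printf("retries=%d level=%d range=[%#llx, %#llx) ret=%d\n",
	       retries, level, (unsigned long long)r.start,
	       (unsigned long long)r.end, ret);
	return 0;
}

With a 4K granule that is at most a handful of rounds (bounded by
KVM_PGTABLE_MAX_LEVELS) before either the pinned address falls outside
the range or we hit the -EINVAL case above.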

Cheers,
Quentin
