From: Andrew Jones <drjones@redhat.com>
To: Marc Zyngier <maz@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, will@kernel.org,
	qperret@google.com, dbrazdil@google.com,
	Steven Price <steven.price@arm.com>,
	Fuad Tabba <tabba@google.com>,
	Srivatsa Vaddagiri <vatsa@codeaurora.org>,
	Shanker R Donthineni <sdonthineni@nvidia.com>,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	kernel-team@android.com
Subject: Re: [PATCH v2 03/16] KVM: arm64: Turn kvm_pgtable_stage2_set_owner into kvm_pgtable_stage2_annotate
Date: Wed, 6 Oct 2021 13:02:11 +0200
Message-ID: <20211006110211.y6kzmjlzgardmwif@gator.home>
In-Reply-To: <20211004174849.2831548-4-maz@kernel.org>

On Mon, Oct 04, 2021 at 06:48:36PM +0100, Marc Zyngier wrote:
> kvm_pgtable_stage2_set_owner() could be generalised into a way
> to store up to 63 bits in the page tables, as long as we don't
> set bit 0.
> 
> Let's just do that.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h          | 12 ++++++-----
>  arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 11 ++++------
>  arch/arm64/kvm/hyp/nvhe/setup.c               | 10 +++++++++-
>  arch/arm64/kvm/hyp/pgtable.c                  | 20 ++++++-------------
>  5 files changed, 27 insertions(+), 28 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index 027783829584..d4d3ae0b5edb 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -329,14 +329,16 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
>  			   void *mc);
>  
>  /**
> - * kvm_pgtable_stage2_set_owner() - Unmap and annotate pages in the IPA space to
> - *				    track ownership.
> + * kvm_pgtable_stage2_annotate() - Unmap and annotate pages in the IPA space
> + *				   to track ownership (and more).
>   * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
>   * @addr:	Base intermediate physical address to annotate.
>   * @size:	Size of the annotated range.
>   * @mc:		Cache of pre-allocated and zeroed memory from which to allocate
>   *		page-table pages.
> - * @owner_id:	Unique identifier for the owner of the page.
> + * @annotation:	A 63 bit value that will be stored in the page tables.
> + *		@annotation[0] must be 0, and @annotation[63:1] is stored
> + *		in the page tables.
>   *
>   * By default, all page-tables are owned by identifier 0. This function can be
>   * used to mark portions of the IPA space as owned by other entities. When a
> @@ -345,8 +347,8 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
>   *
>   * Return: 0 on success, negative error code on failure.
>   */
> -int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
> -				 void *mc, u8 owner_id);
> +int kvm_pgtable_stage2_annotate(struct kvm_pgtable *pgt, u64 addr, u64 size,
> +				void *mc, kvm_pte_t annotation);
>  
>  /**
>   * kvm_pgtable_stage2_unmap() - Remove a mapping from a guest stage-2 page-table.
> diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> index b58c910babaf..9d2ca173ea9a 100644
> --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> @@ -53,7 +53,7 @@ int __pkvm_host_share_hyp(u64 pfn);
>  
>  bool addr_is_memory(phys_addr_t phys);
>  int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
> -int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id);
> +int host_stage2_annotate_locked(phys_addr_t addr, u64 size, kvm_pte_t owner_id);
>  int kvm_host_prepare_stage2(void *pgt_pool_base);
>  void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
>  
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index bacd493a4eac..8cd0c3bdb911 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -286,17 +286,14 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
>  int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
>  			     enum kvm_pgtable_prot prot)
>  {
> -	hyp_assert_lock_held(&host_kvm.lock);
> -
>  	return host_stage2_try(__host_stage2_idmap, addr, addr + size, prot);
>  }
>  
> -int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
> +int host_stage2_annotate_locked(phys_addr_t addr, u64 size,
> +				kvm_pte_t annotation)
>  {
> -	hyp_assert_lock_held(&host_kvm.lock);

Hi Marc,

Why are the lock asserts getting dropped?
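
For reference, a minimal sketch of the renamed helper with the assertion
retained (assuming host_stage2_try() does not already perform the check
internally), built only from the lines this hunk removes and adds:

	int host_stage2_annotate_locked(phys_addr_t addr, u64 size,
					kvm_pte_t annotation)
	{
		/* Same check the pre-patch helper carried. */
		hyp_assert_lock_held(&host_kvm.lock);

		return host_stage2_try(kvm_pgtable_stage2_annotate, &host_kvm.pgt,
				       addr, size, &host_s2_pool, annotation);
	}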

> -
> -	return host_stage2_try(kvm_pgtable_stage2_set_owner, &host_kvm.pgt,
> -			       addr, size, &host_s2_pool, owner_id);
> +	return host_stage2_try(kvm_pgtable_stage2_annotate, &host_kvm.pgt,
> +			       addr, size, &host_s2_pool, annotation);
>  }
>  
>  static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot)
> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> index 57c27846320f..d489fb9b8c29 100644
> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> @@ -159,6 +159,13 @@ static void hpool_put_page(void *addr)
>  	hyp_put_page(&hpool, addr);
>  }
>  
> +#define KVM_INVALID_PTE_OWNER_MASK	GENMASK(9, 2)
> +
> +static kvm_pte_t kvm_init_invalid_leaf_owner(u8 owner_id)
> +{
> +	return FIELD_PREP(KVM_INVALID_PTE_OWNER_MASK, owner_id);
> +}
> +
>  static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
>  					 kvm_pte_t *ptep,
>  					 enum kvm_pgtable_walk_flags flag,
> @@ -186,7 +193,8 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
>  	state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(pte));
>  	switch (state) {
>  	case PKVM_PAGE_OWNED:
> -		return host_stage2_set_owner_locked(phys, PAGE_SIZE, pkvm_hyp_id);
> +		return host_stage2_annotate_locked(phys, PAGE_SIZE,
> +						   kvm_init_invalid_leaf_owner(pkvm_hyp_id));
>  	case PKVM_PAGE_SHARED_OWNED:
>  		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED);
>  		break;
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 6bbfd952f0c5..4201e512da48 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -46,9 +46,6 @@
>  					 KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
>  					 KVM_PTE_LEAF_ATTR_HI_S2_XN)
>  
> -#define KVM_INVALID_PTE_OWNER_MASK	GENMASK(9, 2)
> -#define KVM_MAX_OWNER_ID		1
> -
>  struct kvm_pgtable_walk_data {
>  	struct kvm_pgtable		*pgt;
>  	struct kvm_pgtable_walker	*walker;
> @@ -167,11 +164,6 @@ static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
>  	return pte;
>  }
>  
> -static kvm_pte_t kvm_init_invalid_leaf_owner(u8 owner_id)
> -{
> -	return FIELD_PREP(KVM_INVALID_PTE_OWNER_MASK, owner_id);
> -}
> -
>  static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
>  				  u32 level, kvm_pte_t *ptep,
>  				  enum kvm_pgtable_walk_flags flag)
> @@ -503,7 +495,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
>  struct stage2_map_data {
>  	u64				phys;
>  	kvm_pte_t			attr;
> -	u8				owner_id;
> +	u64				annotation;
>  
>  	kvm_pte_t			*anchor;
>  	kvm_pte_t			*childp;
> @@ -670,7 +662,7 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>  	if (kvm_phys_is_valid(phys))
>  		new = kvm_init_valid_leaf_pte(phys, data->attr, level);
>  	else
> -		new = kvm_init_invalid_leaf_owner(data->owner_id);
> +		new = data->annotation;
>  
>  	if (stage2_pte_is_counted(old)) {
>  		/*
> @@ -859,8 +851,8 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
>  	return ret;
>  }
>  
> -int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
> -				 void *mc, u8 owner_id)
> +int kvm_pgtable_stage2_annotate(struct kvm_pgtable *pgt, u64 addr, u64 size,
> +				void *mc, kvm_pte_t annotation)
>  {
>  	int ret;
>  	struct stage2_map_data map_data = {
> @@ -868,8 +860,8 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
>  		.mmu		= pgt->mmu,
>  		.memcache	= mc,
>  		.mm_ops		= pgt->mm_ops,
> -		.owner_id	= owner_id,
>  		.force_pte	= true,
> +		.annotation	= annotation,
>  	};
>  	struct kvm_pgtable_walker walker = {
>  		.cb		= stage2_map_walker,
> @@ -879,7 +871,7 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
>  		.arg		= &map_data,
>  	};
>  
> -	if (owner_id > KVM_MAX_OWNER_ID)
> +	if (annotation & PTE_VALID)
>  		return -EINVAL;
>  
>  	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
> -- 
> 2.30.2
>

Thanks,
drew

