From: Keqian Zhu <zhukeqian1@huawei.com>
To: Marc Zyngier <maz@kernel.org>
Cc: <linux-kernel@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>, <kvm@vger.kernel.org>,
	<kvmarm@lists.cs.columbia.edu>, Will Deacon <will@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	James Morse <james.morse@arm.com>,
	Robin Murphy <robin.murphy@arm.com>,
	Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	"Suzuki K Poulose" <suzuki.poulose@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Alexios Zavras <alexios.zavras@intel.com>,
	<wanghaibin.wang@huawei.com>, <jiangkunkun@huawei.com>
Subject: Re: [RFC PATCH] kvm: arm64: Try stage2 block mapping for host device MMIO
Date: Thu, 11 Mar 2021 22:28:17 +0800	[thread overview]
Message-ID: <e2a36913-2ded-71ff-d3ed-f7f8d831447c@huawei.com> (raw)
In-Reply-To: <87y2euf5d2.wl-maz@kernel.org>

Hi Marc,

On 2021/3/11 16:43, Marc Zyngier wrote:
> Digging this patch back from my Inbox...
Yeah, thanks ;-)

> 
> On Fri, 22 Jan 2021 08:36:50 +0000,
> Keqian Zhu <zhukeqian1@huawei.com> wrote:
>>
>> The MMIO region of a device may be huge (GB level), so try to use block
>> mapping in stage2 to speed up both map and unmap.
>>
>> Unmap benefits in particular: a TLBI is performed right after each PTE
>> invalidation, so tearing down a GB-level range mapped at PAGE_SIZE
>> granularity takes a long time.
>>
>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>> ---
>>  arch/arm64/include/asm/kvm_pgtable.h | 11 +++++++++++
>>  arch/arm64/kvm/hyp/pgtable.c         | 15 +++++++++++++++
>>  arch/arm64/kvm/mmu.c                 | 12 ++++++++----
>>  3 files changed, 34 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
>> index 52ab38db04c7..2266ac45f10c 100644
>> --- a/arch/arm64/include/asm/kvm_pgtable.h
>> +++ b/arch/arm64/include/asm/kvm_pgtable.h
>> @@ -82,6 +82,17 @@ struct kvm_pgtable_walker {
>>  	const enum kvm_pgtable_walk_flags	flags;
>>  };
>>  
>> +/**
>> + * kvm_supported_pgsize() - Get the max supported page size of a mapping.
>> + * @pgt:	Initialised page-table structure.
>> + * @addr:	Virtual address at which to place the mapping.
>> + * @end:	End virtual address of the mapping.
>> + * @phys:	Physical address of the memory to map.
>> + *
>> + * The smallest return value is PAGE_SIZE.
>> + */
>> +u64 kvm_supported_pgsize(struct kvm_pgtable *pgt, u64 addr, u64 end, u64 phys);
>> +
>>  /**
>>   * kvm_pgtable_hyp_init() - Initialise a hypervisor stage-1 page-table.
>>   * @pgt:	Uninitialised page-table structure to initialise.
>> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
>> index bdf8e55ed308..ab11609b9b13 100644
>> --- a/arch/arm64/kvm/hyp/pgtable.c
>> +++ b/arch/arm64/kvm/hyp/pgtable.c
>> @@ -81,6 +81,21 @@ static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
>>  	return IS_ALIGNED(addr, granule) && IS_ALIGNED(phys, granule);
>>  }
>>  
>> +u64 kvm_supported_pgsize(struct kvm_pgtable *pgt, u64 addr, u64 end, u64 phys)
>> +{
>> +	u32 lvl;
>> +	u64 pgsize = PAGE_SIZE;
>> +
>> +	for (lvl = pgt->start_level; lvl < KVM_PGTABLE_MAX_LEVELS; lvl++) {
>> +		if (kvm_block_mapping_supported(addr, end, phys, lvl)) {
>> +			pgsize = kvm_granule_size(lvl);
>> +			break;
>> +		}
>> +	}
>> +
>> +	return pgsize;
>> +}
>> +
>>  static u32 kvm_pgtable_idx(struct kvm_pgtable_walk_data *data, u32 level)
>>  {
>>  	u64 shift = kvm_granule_shift(level);
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 7d2257cc5438..80b403fc8e64 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -499,7 +499,8 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
>>  int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>>  			  phys_addr_t pa, unsigned long size, bool writable)
>>  {
>> -	phys_addr_t addr;
>> +	phys_addr_t addr, end;
>> +	unsigned long pgsize;
>>  	int ret = 0;
>>  	struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, NULL, };
>>  	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
>> @@ -509,21 +510,24 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>>  
>>  	size += offset_in_page(guest_ipa);
>>  	guest_ipa &= PAGE_MASK;
>> +	end = guest_ipa + size;
>>  
>> -	for (addr = guest_ipa; addr < guest_ipa + size; addr += PAGE_SIZE) {
>> +	for (addr = guest_ipa; addr < end; addr += pgsize) {
>>  		ret = kvm_mmu_topup_memory_cache(&cache,
>>  						 kvm_mmu_cache_min_pages(kvm));
>>  		if (ret)
>>  			break;
>>  
>> +		pgsize = kvm_supported_pgsize(pgt, addr, end, pa);
>> +
>>  		spin_lock(&kvm->mmu_lock);
>> -		ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
>> +		ret = kvm_pgtable_stage2_map(pgt, addr, pgsize, pa, prot,
>>  					     &cache);
>>  		spin_unlock(&kvm->mmu_lock);
>>  		if (ret)
>>  			break;
>>  
>> -		pa += PAGE_SIZE;
>> +		pa += pgsize;
>>  	}
>>  
>>  	kvm_mmu_free_memory_cache(&cache);
> 
> There is one issue with this patch, which is that it only does half
> the job. A VM_PFNMAP VMA can definitely be faulted in dynamically, and
> in that case we force this to be a page mapping. This conflicts with
> what you are doing here.
Oh yes, these two paths should use the same mapping logic.

I searched for "force_pte" and found some discussion [1] between you and Christoffer,
but I could not find a reason for forcing pte mappings for device MMIO regions (except
that we want to keep the same logic as the eager mapping path). So if you don't object,
I will try to implement block mapping for device MMIO in user_mem_abort(); a rough
sketch of what I have in mind is below.
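
Something like this (completely untested and only to show the direction; the helper
name is a placeholder, not an existing kernel function, and it assumes the current
flow where a VM_PFNMAP VMA sets force_pte and falls back to PAGE_SHIFT):

/*
 * For a VM_PFNMAP VMA, pick the largest block size whose alignment is
 * shared by hva and the backing PA and which is fully covered by the
 * VMA. The gpa/hva alignment is still checked by the existing
 * fault_supports_stage2_huge_mapping() logic.
 */
static unsigned long device_vma_page_shift(struct vm_area_struct *vma,
					   unsigned long hva)
{
	unsigned long pa = (vma->vm_pgoff << PAGE_SHIFT) +
			   (hva - vma->vm_start);

#ifndef __PAGETABLE_PMD_FOLDED
	if ((hva & (PUD_SIZE - 1)) == (pa & (PUD_SIZE - 1)) &&
	    ALIGN_DOWN(hva, PUD_SIZE) >= vma->vm_start &&
	    ALIGN(hva, PUD_SIZE) <= vma->vm_end)
		return PUD_SHIFT;
#endif

	if ((hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) &&
	    ALIGN_DOWN(hva, PMD_SIZE) >= vma->vm_start &&
	    ALIGN(hva, PMD_SIZE) <= vma->vm_end)
		return PMD_SHIFT;

	return PAGE_SHIFT;
}

user_mem_abort() would then use this shift instead of unconditionally forcing
PAGE_SHIFT for VM_PFNMAP VMAs (logging_active would still force pte mappings).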

> 
> There is also the fact that if we can map things on demand, why are we
> still mapping these MMIO regions ahead of time?
Indeed. Though this provides good *startup* performance for guest MMIO accesses, it's
hard to keep the two paths in sync. We can either keep this minor optimization or delete
it to avoid the maintenance burden; which do you prefer?

BTW, could you please have a look at my other patch series [2] about the HW/SW combined dirty log? ;)

Thanks,
Keqian

[1] https://patchwork.kernel.org/project/linux-arm-kernel/patch/20191211165651.7889-2-maz@kernel.org/
[2] https://lore.kernel.org/linux-arm-kernel/20210126124444.27136-1-zhukeqian1@huawei.com/


> 
> Thanks,
> 
> 	M.
> 
