From: Gavin Shan <gshan@redhat.com>
To: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>,
	kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v3 02/21] KVM: arm64: Add stand-alone page-table walker infrastructure
Date: Thu, 3 Sep 2020 11:11:37 +1000	[thread overview]
Message-ID: <eedaf062-703c-6782-37f0-57b1b05e1d93@redhat.com> (raw)
In-Reply-To: <20200902110233.GE5567@willie-the-truck>

Hi Will,

On 9/2/20 9:02 PM, Will Deacon wrote:
> On Wed, Sep 02, 2020 at 04:31:32PM +1000, Gavin Shan wrote:
>> On 8/25/20 7:39 PM, Will Deacon wrote:
>>> The KVM page-table code is intricately tied into the kernel page-table
>>> code and re-uses the pte/pmd/pud/p4d/pgd macros directly in an attempt
>>> to reduce code duplication. Unfortunately, the reality is that there is
>>> an awful lot of code required to make this work, and at the end of the
>>> day you're limited to creating page-tables with the same configuration
>>> as the host kernel. Furthermore, lifting the page-table code to run
>>> directly at EL2 on a non-VHE system (as we plan to do in future
>>> patches) is practically impossible due to the number of dependencies it
>>> has on the core kernel.
>>>
>>> Introduce a framework for walking Armv8 page-tables configured
>>> independently from the host kernel.
>>>
>>> Cc: Marc Zyngier <maz@kernel.org>
>>> Cc: Quentin Perret <qperret@google.com>
>>> Signed-off-by: Will Deacon <will@kernel.org>
>>> ---
>>>  arch/arm64/include/asm/kvm_pgtable.h | 101 ++++++++++
>>>  arch/arm64/kvm/hyp/Makefile          |   2 +-
>>>  arch/arm64/kvm/hyp/pgtable.c         | 290 +++++++++++++++++++++++++++
>>>  3 files changed, 392 insertions(+), 1 deletion(-)
>>>  create mode 100644 arch/arm64/include/asm/kvm_pgtable.h
>>>  create mode 100644 arch/arm64/kvm/hyp/pgtable.c
>
> [...]
>
>>> +struct kvm_pgtable_walk_data {
>>> +	struct kvm_pgtable		*pgt;
>>> +	struct kvm_pgtable_walker	*walker;
>>> +
>>> +	u64				addr;
>>> +	u64				end;
>>> +};
>>> +
>>
>> Some of the following functions might be worth inlining, considering
>> their complexity :)
>
> I'll leave that for the compiler to figure out :)

Ok :)

>>> +static u32 kvm_pgd_pages(u32 ia_bits, u32 start_level)
>>> +{
>>> +	struct kvm_pgtable pgt = {
>>> +		.ia_bits	= ia_bits,
>>> +		.start_level	= start_level,
>>> +	};
>>> +
>>> +	return __kvm_pgd_page_idx(&pgt, -1ULL) + 1;
>>> +}
>>> +
>>
>> It seems @pgt.start_level is assigned the wrong value here.
>> For example, @start_level is 2 when @ia_bits and PAGE_SIZE
>> are 40 and 64KB respectively. In this case, __kvm_pgd_page_idx()
>> always returns zero. However, the extra page covers up the
>> issue. I think something like below might be needed:
>>
>>     struct kvm_pgtable pgt = {
>>         .ia_bits     = ia_bits,
>>         .start_level = KVM_PGTABLE_MAX_LEVELS - start_level + 1,
>>     };
>
> Hmm, we're pulling the start_level right out of the vtcr, so I don't see
> how it can be wrong. In your example, a start_level of 2 seems correct to
> me, as we'll translate 13 bits there, then 13 bits at level 3 which covers
> the 24 bits you need (with a 16-bit offset within the page).
>
> Your suggestion would give us a start_level of 1, which has a redundant
> level of translation. Maybe you're looking at the levels upside-down? The
> top level is level 0 and each time you walk to a new level, that number
> increases.
>
> But perhaps I'm missing something. Please could you elaborate if you think
> there's a problem here?

Thanks for the explanation. I think I was understanding the code in the
wrong way. In this particular path, __kvm_pgd_page_idx() is used to
calculate how many subordinate pages are needed to hold the PGDs. If I'm
correct, there are at most 16 pages for the PGDs. So the current
implementation looks correct to me. There is another question, which
might not be relevant.
I added some logs and hopefully my calculation makes sense. I have the
following configuration (values) in my experiment. I'm including the
kernel log to make the information complete:

[ 5089.107147] kvm_arch_init_vm: kvm@0xfffffe0028460000, type=0x0
[ 5089.112973] kvm_arm_setup_stage2: kvm@0xfffffe0028460000, type=0x0
[ 5089.119157] kvm_ipa_limit=0x2c, phys_shift=0x28
[ 5089.123936] kvm->arch.vtcr=0x00000000802c7558
[ 5089.128552] kvm_init_stage2_mmu: kvm@0xfffffe0028460000
[ 5089.133765] kvm_pgtable_stage2_init: kvm@0xfffffe0028460000, ia_bits=0x28, start_level=0x2

   PAGE_SIZE:       64KB
   @kvm->arch.vtcr: 0x00000000_802c7558
   @ipa_bits:       40
   @start_level:    2

#define KVM_PGTABLE_MAX_LEVELS	4U

static u64 kvm_granule_shift(u32 level)
{
	return (KVM_PGTABLE_MAX_LEVELS - level) * (PAGE_SHIFT - 3) + 3;
}

static u32 __kvm_pgd_page_idx(struct kvm_pgtable *pgt, u64 addr)
{
	u64 shift = kvm_granule_shift(pgt->start_level - 1); /* May underflow */
	u64 mask = BIT(pgt->ia_bits) - 1;

	return (addr & mask) >> shift;

	// shift = kvm_granule_shift(2 - 1) = ((3 * 13) + 3) = 42
	// mask  = ((1UL << 40) - 1)
	// return (0x000000ff_ffffffff >> 42) = 0
	//
	// QUESTION: Since @ipa_bits is 40 bits, why do we need to shift
	// by 42 bits here?
}

I was also thinking about the following case, which makes sense to me.
Note I didn't add logs to debug this case.
   PAGE_SIZE:    4KB
   @ipa_bits:    40
   @start_level: 1

static u32 __kvm_pgd_page_idx(struct kvm_pgtable *pgt, u64 addr)
{
	u64 shift = kvm_granule_shift(pgt->start_level - 1); /* May underflow */
	u64 mask = BIT(pgt->ia_bits) - 1;

	return (addr & mask) >> shift;

	// shift = kvm_granule_shift(1 - 1) = ((4 * 9) + 3) = 39
	// mask  = ((1UL << 40) - 1)
	// return (0x000000ff_ffffffff >> 39) = 1
}

>>> +static int _kvm_pgtable_walk(struct kvm_pgtable_walk_data *data)
>>> +{
>>> +	u32 idx;
>>> +	int ret = 0;
>>> +	struct kvm_pgtable *pgt = data->pgt;
>>> +	u64 limit = BIT(pgt->ia_bits);
>>> +
>>> +	if (data->addr > limit || data->end > limit)
>>> +		return -ERANGE;
>>> +
>>> +	if (!pgt->pgd)
>>> +		return -EINVAL;
>>> +
>>> +	for (idx = kvm_pgd_page_idx(data); data->addr < data->end; ++idx) {
>>> +		kvm_pte_t *ptep = &pgt->pgd[idx * PTRS_PER_PTE];
>>> +
>>> +		ret = __kvm_pgtable_walk(data, ptep, pgt->start_level);
>>> +		if (ret)
>>> +			break;
>>> +	}
>>> +
>>> +	return ret;
>>> +}
>>> +
>>
>> I guess we need to bail on the following condition:
>>
>>     if (data->addr >= limit || data->end >= limit)
>>         return -ERANGE;
>
> What's wrong with the existing check? In particular, I think we _want_
> to support data->end == limit (it's exclusive). If data->addr == limit,
> then we'll have a size of zero and the loop won't run.

I was thinking @limit is exclusive, so we need to bail when hitting the
ceiling. The @limit was derived from @ia_bits. For example, it's
0x00000100_00000000 when @ia_bits is 40 bits, which is an invalid address
to the guest. But I'm still wrong in this case :)

Thanks,
Gavin

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm