* [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
@ 2021-03-03  2:42 ` Jia He
  0 siblings, 0 replies; 36+ messages in thread
From: Jia He @ 2021-03-03  2:42 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Catalin Marinas,
	Will Deacon, Gavin Shan, Yanan Wang, Quentin Perret,
	linux-arm-kernel, linux-kernel, Jia He

If the start addr is not aligned to the granule size of that level, the
loop step size should be adjusted to the next boundary instead of a simple
kvm_granule_size(level) increment. Otherwise, some mmu entries might miss
the chance to be walked through.
E.g., assume the unmap range [data->addr, data->end] is
[0xff00ab2000,0xff00cb2000] in a level 2 walk with no block mapping.
The first part of that range, [0xff00ab2000,0xff00c00000], is covered by
a pmd entry whose value is 0x83fbd2c1002 (not a valid entry). In this
case, data->addr should be adjusted to 0xff00c00000 instead of
0xff00cb2000.

Without this fix, a userspace "segmentation fault" error can easily be
triggered by running simple gVisor runsc cases on an Ampere Altra
server:
    docker run --runtime=runsc -it --rm  ubuntu /bin/bash

In container:
    for i in `seq 1 100`;do ls;done

Reported-by: Howard Zhang <Howard.Zhang@arm.com>
Signed-off-by: Jia He <justin.he@arm.com>
---
 arch/arm64/kvm/hyp/pgtable.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index bdf8e55ed308..4d99d07c610c 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
 		goto out;
 
 	if (!table) {
+		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
 		data->addr += kvm_granule_size(level);
 		goto out;
 	}
-- 
2.17.1


* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-03  2:42 ` Jia He
@ 2021-03-03  9:54   ` Marc Zyngier
  -1 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2021-03-03  9:54 UTC (permalink / raw)
  To: Jia He
  Cc: kvmarm, James Morse, Julien Thierry, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Gavin Shan, Yanan Wang,
	Quentin Perret, linux-arm-kernel, linux-kernel

Hi Jia,

On Wed, 03 Mar 2021 02:42:25 +0000,
Jia He <justin.he@arm.com> wrote:
> 
> If the start addr is not aligned to the granule size of that level, the
> loop step size should be adjusted to the next boundary instead of a simple
> kvm_granule_size(level) increment. Otherwise, some mmu entries might miss
> the chance to be walked through.
> E.g., assume the unmap range [data->addr, data->end] is
> [0xff00ab2000,0xff00cb2000] in a level 2 walk with no block mapping.

When does this occur? Upgrade from page mappings to block? Swap out?

> The first part of that range, [0xff00ab2000,0xff00c00000], is covered by
> a pmd entry whose value is 0x83fbd2c1002 (not a valid entry). In this
> case, data->addr should be adjusted to 0xff00c00000 instead of 0xff00cb2000.

Let me see if I understand this. Assuming 4k pages, the region
described above spans *two* 2M entries:

(a) ff00ab2000-ff00c00000, part of ff00a00000-ff00c00000
(b) ff00c00000-ff00db2000, part of ff00c00000-ff00e00000

(a) has no valid mapping, but (b) does. Because we fail to correctly
align on a block boundary when skipping (a), we also skip (b), which
is then left mapped.

Did I get it right? If so, yes, this is... annoying.

Understanding the circumstances this triggers in would be most
interesting. This current code seems to assume that we get ranges
aligned to mapping boundaries, but I seem to remember that the old
code did use the stage2_*_addr_end() helpers to deal with this case.

Will: I don't think things have changed in that respect, right?

> 
> Without this fix, a userspace "segmentation fault" error can easily be
> triggered by running simple gVisor runsc cases on an Ampere Altra
> server:
>     docker run --runtime=runsc -it --rm  ubuntu /bin/bash
> 
> In container:
>     for i in `seq 1 100`;do ls;done

The workload on its own isn't that interesting. What I'd like to
understand is what happens on the host during that time.

> 
> Reported-by: Howard Zhang <Howard.Zhang@arm.com>
> Signed-off-by: Jia He <justin.he@arm.com>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index bdf8e55ed308..4d99d07c610c 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
>  		goto out;
>  
>  	if (!table) {
> +		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
>  		data->addr += kvm_granule_size(level);
>  		goto out;
>  	}

It otherwise looks good to me. Quentin, Will: unless you object to
this, I plan to take it in the next round of fixes with

Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker infrastructure")
Cc: stable@vger.kernel.org

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-03  9:54   ` Marc Zyngier
@ 2021-03-03 11:08     ` Quentin Perret
  -1 siblings, 0 replies; 36+ messages in thread
From: Quentin Perret @ 2021-03-03 11:08 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jia He, kvmarm, James Morse, Julien Thierry, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Gavin Shan, Yanan Wang,
	linux-arm-kernel, linux-kernel

On Wednesday 03 Mar 2021 at 09:54:25 (+0000), Marc Zyngier wrote:
> Hi Jia,
> 
> On Wed, 03 Mar 2021 02:42:25 +0000,
> Jia He <justin.he@arm.com> wrote:
> > 
> > If the start addr is not aligned to the granule size of that level, the
> > loop step size should be adjusted to the next boundary instead of a simple
> > kvm_granule_size(level) increment. Otherwise, some mmu entries might miss
> > the chance to be walked through.
> > E.g., assume the unmap range [data->addr, data->end] is
> > [0xff00ab2000,0xff00cb2000] in a level 2 walk with no block mapping.
> 
> When does this occur? Upgrade from page mappings to block? Swap out?
> 
> > The first part of that range, [0xff00ab2000,0xff00c00000], is covered by
> > a pmd entry whose value is 0x83fbd2c1002 (not a valid entry). In this
> > case, data->addr should be adjusted to 0xff00c00000 instead of 0xff00cb2000.
> 
> Let me see if I understand this. Assuming 4k pages, the region
> described above spans *two* 2M entries:
> 
> (a) ff00ab2000-ff00c00000, part of ff00a00000-ff00c00000
> (b) ff00c00000-ff00db2000, part of ff00c00000-ff00e00000
> 
> (a) has no valid mapping, but (b) does. Because we fail to correctly
> align on a block boundary when skipping (a), we also skip (b), which
> is then left mapped.
> 
> Did I get it right? If so, yes, this is... annoying.
> 
> Understanding the circumstances this triggers in would be most
> interesting. This current code seems to assume that we get ranges
> aligned to mapping boundaries, but I seem to remember that the old
> code did use the stage2_*_addr_end() helpers to deal with this case.
> 
> Will: I don't think things have changed in that respect, right?

Indeed we should still use stage2_*_addr_end(), especially in the unmap
path that is mentioned here, so it would be helpful to have a little bit
more context.

> > Without this fix, a userspace "segmentation fault" error can easily be
> > triggered by running simple gVisor runsc cases on an Ampere Altra
> > server:
> >     docker run --runtime=runsc -it --rm  ubuntu /bin/bash
> > 
> > In container:
> >     for i in `seq 1 100`;do ls;done
> 
> The workload on its own isn't that interesting. What I'd like to
> understand is what happens on the host during that time.
> 
> > 
> > Reported-by: Howard Zhang <Howard.Zhang@arm.com>
> > Signed-off-by: Jia He <justin.he@arm.com>
> > ---
> >  arch/arm64/kvm/hyp/pgtable.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index bdf8e55ed308..4d99d07c610c 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
> >  		goto out;
> >  
> >  	if (!table) {
> > +		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
> >  		data->addr += kvm_granule_size(level);
> >  		goto out;
> >  	}
> 
> It otherwise looks good to me. Quentin, Will: unless you object to
> this, I plan to take it in the next round of fixes with

Though I'm still unsure how we hit that today, the change makes sense on
its own I think, so no objection from me.

Thanks,
Quentin

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-03  2:42 ` Jia He
@ 2021-03-03 11:29   ` Will Deacon
  -1 siblings, 0 replies; 36+ messages in thread
From: Will Deacon @ 2021-03-03 11:29 UTC (permalink / raw)
  To: Jia He
  Cc: Marc Zyngier, kvmarm, James Morse, Julien Thierry,
	Suzuki K Poulose, Catalin Marinas, Gavin Shan, Yanan Wang,
	Quentin Perret, linux-arm-kernel, linux-kernel

On Wed, Mar 03, 2021 at 10:42:25AM +0800, Jia He wrote:
> If the start addr is not aligned to the granule size of that level, the
> loop step size should be adjusted to the next boundary instead of a simple
> kvm_granule_size(level) increment. Otherwise, some mmu entries might miss
> the chance to be walked through.
> E.g., assume the unmap range [data->addr, data->end] is
> [0xff00ab2000,0xff00cb2000] in a level 2 walk with no block mapping.
> The first part of that range, [0xff00ab2000,0xff00c00000], is covered by
> a pmd entry whose value is 0x83fbd2c1002 (not a valid entry). In this
> case, data->addr should be adjusted to 0xff00c00000 instead of
> 0xff00cb2000.
> 
> Without this fix, a userspace "segmentation fault" error can easily be
> triggered by running simple gVisor runsc cases on an Ampere Altra
> server:
>     docker run --runtime=runsc -it --rm  ubuntu /bin/bash
> 
> In container:
>     for i in `seq 1 100`;do ls;done
> 
> Reported-by: Howard Zhang <Howard.Zhang@arm.com>
> Signed-off-by: Jia He <justin.he@arm.com>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index bdf8e55ed308..4d99d07c610c 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
>  		goto out;
>  
>  	if (!table) {
> +		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
>  		data->addr += kvm_granule_size(level);

Can you replace both of these lines with:

	data->addr = ALIGN(data->addr, kvm_granule_size(level));

instead?

Will

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-03  9:54   ` Marc Zyngier
  (?)
@ 2021-03-03 11:49     ` Will Deacon
  -1 siblings, 0 replies; 36+ messages in thread
From: Will Deacon @ 2021-03-03 11:49 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jia He, kvmarm, James Morse, Julien Thierry, Suzuki K Poulose,
	Catalin Marinas, Gavin Shan, Yanan Wang, Quentin Perret,
	linux-arm-kernel, linux-kernel

On Wed, Mar 03, 2021 at 09:54:25AM +0000, Marc Zyngier wrote:
> Hi Jia,
> 
> On Wed, 03 Mar 2021 02:42:25 +0000,
> Jia He <justin.he@arm.com> wrote:
> > 
> > If the start addr is not aligned to the granule size of that level,
> > the loop step size should be adjusted to the next boundary instead of
> > a simple kvm_granule_size(level) increment. Otherwise, some mmu
> > entries might be skipped entirely.
> > E.g. Assume the unmap range [data->addr, data->end] is
> > [0xff00ab2000,0xff00cb2000] in level 2 walking and NOT block mapping.
> 
> When does this occur? Upgrade from page mappings to block? Swap out?
> 
> > And the 1st part of that pmd entry is [0xff00ab2000,0xff00c00000]. The
> > pmd value is 0x83fbd2c1002 (not valid entry). In this case, data->addr
> > should be adjusted to 0xff00c00000 instead of 0xff00cb2000.
> 
> Let me see if I understand this. Assuming 4k pages, the region
> described above spans *two* 2M entries:
> 
> (a) ff00ab2000-ff00c00000, part of ff00a00000-ff00c00000
> (b) ff00c00000-ff00db2000, part of ff00c00000-ff00e00000
> 
> (a) has no valid mapping, but (b) does. Because we fail to correctly
> align on a block boundary when skipping (a), we also skip (b), which
> is then left mapped.
> 
> Did I get it right? If so, yes, this is... annoying.
> 
> Understanding the circumstances this triggers in would be most
> interesting. This current code seems to assume that we get ranges
> aligned to mapping boundaries, but I seem to remember that the old
> code did use the stage2_*_addr_end() helpers to deal with this case.
> 
> Will: I don't think things have changed in that respect, right?

We've maintained stage2_pgd_addr_end() for the top-level iterator, but
it looks like we're failing to do the rounding in __kvm_pgtable_visit()
when hitting a leaf entry, so the caller (__kvm_pgtable_walk()) will
terminate early.

I agree that it's annoying, as you'd have expected us to run into this
earlier on.

Will

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-03 11:29   ` Will Deacon
  (?)
@ 2021-03-03 19:07     ` Marc Zyngier
  -1 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2021-03-03 19:07 UTC (permalink / raw)
  To: Will Deacon, Jia He
  Cc: kvmarm, James Morse, Julien Thierry, Suzuki K Poulose,
	Catalin Marinas, Gavin Shan, Yanan Wang, Quentin Perret,
	linux-arm-kernel, linux-kernel

On Wed, 03 Mar 2021 11:29:34 +0000,
Will Deacon <will@kernel.org> wrote:
> 
> On Wed, Mar 03, 2021 at 10:42:25AM +0800, Jia He wrote:
> > If the start addr is not aligned to the granule size of that level,
> > the loop step size should be adjusted to the next boundary instead of
> > a simple kvm_granule_size(level) increment. Otherwise, some mmu
> > entries might be skipped entirely.
> > E.g. Assume the unmap range [data->addr, data->end] is
> > [0xff00ab2000,0xff00cb2000] in level 2 walking and NOT block mapping.
> > And the 1st part of that pmd entry is [0xff00ab2000,0xff00c00000]. The
> > pmd value is 0x83fbd2c1002 (not valid entry). In this case, data->addr
> > should be adjusted to 0xff00c00000 instead of 0xff00cb2000.
> > 
> > Without this fix, userspace "segment fault" error can be easily
> > triggered by running simple gVisor runsc cases on an Ampere Altra
> > server:
> >     docker run --runtime=runsc -it --rm  ubuntu /bin/bash
> > 
> > In container:
> >     for i in `seq 1 100`;do ls;done
> > 
> > Reported-by: Howard Zhang <Howard.Zhang@arm.com>
> > Signed-off-by: Jia He <justin.he@arm.com>
> > ---
> >  arch/arm64/kvm/hyp/pgtable.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index bdf8e55ed308..4d99d07c610c 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
> >  		goto out;
> >  
> >  	if (!table) {
> > +		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
> >  		data->addr += kvm_granule_size(level);
> 
> Can you replace both of these lines with:
> 
> 	data->addr = ALIGN(data->addr, kvm_granule_size(level));
> 
> instead?

Seems like a good option. I also took the liberty to rewrite the
commit message in an effort to make it a bit clearer.

Jia, please let me know if you are OK with these cosmetic changes.


Thanks,

	M.

From e0524b41a71e0f17d6dc8f197e421e677d584e72 Mon Sep 17 00:00:00 2001
From: Jia He <justin.he@arm.com>
Date: Wed, 3 Mar 2021 10:42:25 +0800
Subject: [PATCH] KVM: arm64: Fix range alignment when walking page tables

When walking the page tables at a given level, and if the start
address for the range isn't aligned for that level, we propagate
the misalignment on each iteration at that level.

This results in the walker ignoring a number of entries (depending
on the original misalignment) on each subsequent iteration.

Properly aligning the address at the before the next iteration
addresses the issue.

Cc: stable@vger.kernel.org
Reported-by: Howard Zhang <Howard.Zhang@arm.com>
Signed-off-by: Jia He <justin.he@arm.com>
Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker infrastructure")
[maz: rewrite commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210303024225.2591-1-justin.he@arm.com
---
 arch/arm64/kvm/hyp/pgtable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4d177ce1d536..124cd2f93020 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -223,7 +223,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
 		goto out;
 
 	if (!table) {
-		data->addr += kvm_granule_size(level);
+		data->addr = ALIGN(data->addr, kvm_granule_size(level));
 		goto out;
 	}
 
-- 
2.30.0


-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-03 19:07     ` Marc Zyngier
  (?)
@ 2021-03-03 21:13       ` Will Deacon
  -1 siblings, 0 replies; 36+ messages in thread
From: Will Deacon @ 2021-03-03 21:13 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Jia He, kvmarm, James Morse, Julien Thierry, Suzuki K Poulose,
	Catalin Marinas, Gavin Shan, Yanan Wang, Quentin Perret,
	linux-arm-kernel, linux-kernel

On Wed, Mar 03, 2021 at 07:07:37PM +0000, Marc Zyngier wrote:
> From e0524b41a71e0f17d6dc8f197e421e677d584e72 Mon Sep 17 00:00:00 2001
> From: Jia He <justin.he@arm.com>
> Date: Wed, 3 Mar 2021 10:42:25 +0800
> Subject: [PATCH] KVM: arm64: Fix range alignment when walking page tables
> 
> When walking the page tables at a given level, and if the start
> address for the range isn't aligned for that level, we propagate
> the misalignment on each iteration at that level.
> 
> This results in the walker ignoring a number of entries (depending
> on the original misalignment) on each subsequent iteration.
> 
> Properly aligning the address at the before the next iteration

"at the before the next" ???

> addresses the issue.
> 
> Cc: stable@vger.kernel.org
> Reported-by: Howard Zhang <Howard.Zhang@arm.com>
> Signed-off-by: Jia He <justin.he@arm.com>
> Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker infrastructure")
> [maz: rewrite commit message]
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> Link: https://lore.kernel.org/r/20210303024225.2591-1-justin.he@arm.com
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 4d177ce1d536..124cd2f93020 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -223,7 +223,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
>  		goto out;
>  
>  	if (!table) {
> -		data->addr += kvm_granule_size(level);
> +		data->addr = ALIGN(data->addr, kvm_granule_size(level));
>  		goto out;
>  	}

If Jia is happy with it, please feel free to add:

Acked-by: Will Deacon <will@kernel.org>

Will

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-03 11:08     ` Quentin Perret
  (?)
@ 2021-03-04  0:38       ` Justin He
  -1 siblings, 0 replies; 36+ messages in thread
From: Justin He @ 2021-03-04  0:38 UTC (permalink / raw)
  To: Quentin Perret, Marc Zyngier
  Cc: kvmarm, James Morse, Julien Thierry, Suzuki Poulose,
	Catalin Marinas, Will Deacon, Gavin Shan, Yanan Wang,
	linux-arm-kernel, linux-kernel

Hi Quentin and Marc
I noticed Marc has sent out a new version on my behalf; thanks for the help.
I hate the time difference; sorry for the late reply.

Let me answer the comments below to make things clear.

> -----Original Message-----
> From: Quentin Perret <qperret@google.com>
> Sent: Wednesday, March 3, 2021 7:09 PM
> To: Marc Zyngier <maz@kernel.org>
> Cc: Justin He <Justin.He@arm.com>; kvmarm@lists.cs.columbia.edu; James
> Morse <James.Morse@arm.com>; Julien Thierry <julien.thierry.kdev@gmail.com>;
> Suzuki Poulose <Suzuki.Poulose@arm.com>; Catalin Marinas
> <Catalin.Marinas@arm.com>; Will Deacon <will@kernel.org>; Gavin Shan
> <gshan@redhat.com>; Yanan Wang <wangyanan55@huawei.com>; linux-arm-
> kernel@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
> 
> On Wednesday 03 Mar 2021 at 09:54:25 (+0000), Marc Zyngier wrote:
> > Hi Jia,
> >
> > On Wed, 03 Mar 2021 02:42:25 +0000,
> > Jia He <justin.he@arm.com> wrote:
> > >
> > > If the start addr is not aligned to the granule size of that level,
> > > the loop step size should be adjusted to the next boundary instead
> > > of a simple kvm_granule_size(level) increment. Otherwise, some mmu
> > > entries might be skipped entirely.
> > > E.g. Assume the unmap range [data->addr, data->end] is
> > > [0xff00ab2000,0xff00cb2000] in level 2 walking and NOT block mapping.
> >
> > When does this occur? Upgrade from page mappings to block? Swap out?
> >
> > > And the 1st part of that pmd entry is [0xff00ab2000,0xff00c00000]. The
> > > pmd value is 0x83fbd2c1002 (not valid entry). In this case, data->addr
> > > should be adjusted to 0xff00c00000 instead of 0xff00cb2000.
> >
> > Let me see if I understand this. Assuming 4k pages, the region
> > described above spans *two* 2M entries:
> >
> > (a) ff00ab2000-ff00c00000, part of ff00a00000-ff00c00000
> > (b) ff00c00000-ff00db2000, part of ff00c00000-ff00e00000
> >
> > (a) has no valid mapping, but (b) does. Because we fail to correctly
> > align on a block boundary when skipping (a), we also skip (b), which
> > is then left mapped.
> >
> > Did I get it right? If so, yes, this is... annoying.
> >

Yes, exactly the case

> > Understanding the circumstances this triggers in would be most
> > interesting. This current code seems to assume that we get ranges
> > aligned to mapping boundaries, but I seem to remember that the old
> > code did use the stage2_*_addr_end() helpers to deal with this case.
> >
> > Will: I don't think things have changed in that respect, right?
> 
> Indeed we should still use stage2_*_addr_end(), especially in the unmap
> path that is mentioned here, so it would be helpful to have a little bit
> more context.

Yes, stage2_pgd_addr_end() is still there, but stage2_pmd_addr_end() was removed.
> 
> > > Without this fix, userspace "segment fault" error can be easily
> > > triggered by running simple gVisor runsc cases on an Ampere Altra
> > > server:
> > >     docker run --runtime=runsc -it --rm  ubuntu /bin/bash
> > >
> > > In container:
> > >     for i in `seq 1 100`;do ls;done
> >
> > The workload on its own isn't that interesting. What I'd like to
> > understand is what happens on the host during that time.

Okay

> >
> > >
> > > Reported-by: Howard Zhang <Howard.Zhang@arm.com>
> > > Signed-off-by: Jia He <justin.he@arm.com>
> > > ---
> > >  arch/arm64/kvm/hyp/pgtable.c | 1 +
> > >  1 file changed, 1 insertion(+)
> > >
> > > diff --git a/arch/arm64/kvm/hyp/pgtable.c
> b/arch/arm64/kvm/hyp/pgtable.c
> > > index bdf8e55ed308..4d99d07c610c 100644
> > > --- a/arch/arm64/kvm/hyp/pgtable.c
> > > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > > @@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct
> kvm_pgtable_walk_data *data,
> > >  		goto out;
> > >
> > >  	if (!table) {
> > > +		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
> > >  		data->addr += kvm_granule_size(level);
> > >  		goto out;
> > >  	}
> >
> > It otherwise looks good to me. Quentin, Will: unless you object to
> > this, I plan to take it in the next round of fixes with
> 
> Though I'm still unsure how we hit that today, the change makes sense on
> its own I think, so no objection from me.
> 
> Thanks,
> Quentin

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-03 21:13       ` Will Deacon
  (?)
@ 2021-03-04  0:46         ` Justin He
  -1 siblings, 0 replies; 36+ messages in thread
From: Justin He @ 2021-03-04  0:46 UTC (permalink / raw)
  To: Will Deacon, Marc Zyngier
  Cc: kvmarm, James Morse, Julien Thierry, Suzuki Poulose,
	Catalin Marinas, Gavin Shan, Yanan Wang, Quentin Perret,
	linux-arm-kernel, linux-kernel

Hi Marc

> -----Original Message-----
> From: Will Deacon <will@kernel.org>
> Sent: Thursday, March 4, 2021 5:13 AM
> To: Marc Zyngier <maz@kernel.org>
> Cc: Justin He <Justin.He@arm.com>; kvmarm@lists.cs.columbia.edu; James
> Morse <James.Morse@arm.com>; Julien Thierry <julien.thierry.kdev@gmail.com>;
> Suzuki Poulose <Suzuki.Poulose@arm.com>; Catalin Marinas
> <Catalin.Marinas@arm.com>; Gavin Shan <gshan@redhat.com>; Yanan Wang
> <wangyanan55@huawei.com>; Quentin Perret <qperret@google.com>; linux-arm-
> kernel@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
> 
> On Wed, Mar 03, 2021 at 07:07:37PM +0000, Marc Zyngier wrote:
> > From e0524b41a71e0f17d6dc8f197e421e677d584e72 Mon Sep 17 00:00:00 2001
> > From: Jia He <justin.he@arm.com>
> > Date: Wed, 3 Mar 2021 10:42:25 +0800
> > Subject: [PATCH] KVM: arm64: Fix range alignment when walking page tables
> >
> > When walking the page tables at a given level, and if the start
> > address for the range isn't aligned for that level, we propagate
> > the misalignment on each iteration at that level.
> >
> > This results in the walker ignoring a number of entries (depending
> > on the original misalignment) on each subsequent iteration.
> >
> > Properly aligning the address at the before the next iteration
> 
> "at the before the next" ???
> 
> > addresses the issue.
> >
> > Cc: stable@vger.kernel.org
> > Reported-by: Howard Zhang <Howard.Zhang@arm.com>
> > Signed-off-by: Jia He <justin.he@arm.com>
> > Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker
> infrastructure")
> > [maz: rewrite commit message]
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > Link: https://lore.kernel.org/r/20210303024225.2591-1-justin.he@arm.com
> > ---
> >  arch/arm64/kvm/hyp/pgtable.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index 4d177ce1d536..124cd2f93020 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -223,7 +223,7 @@ static inline int __kvm_pgtable_visit(struct
> kvm_pgtable_walk_data *data,
> >  		goto out;
> >
> >  	if (!table) {
> > -		data->addr += kvm_granule_size(level);
> > +		data->addr = ALIGN(data->addr, kvm_granule_size(level));

What if the previous data->addr is already aligned to kvm_granule_size(level)?
ALIGN() would then leave it unchanged, so the walker makes no progress and we
dead-loop. Am I missing anything else?

--
Cheers,
Justin (Jia He)

> >  		goto out;
> >  	}
> 
> If Jia is happy with it, please feel free to add:
> 
> Acked-by: Will Deacon <will@kernel.org>
> 
> Will

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-04  0:46         ` Justin He
  (?)
@ 2021-03-04  9:16           ` Marc Zyngier
  -1 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2021-03-04  9:16 UTC (permalink / raw)
  To: Justin He
  Cc: Will Deacon, kvmarm, James Morse, Julien Thierry, Suzuki Poulose,
	Catalin Marinas, Gavin Shan, Yanan Wang, Quentin Perret,
	linux-arm-kernel, linux-kernel

On 2021-03-04 00:46, Justin He wrote:
> Hi Marc
> 
>> -----Original Message-----
>> From: Will Deacon <will@kernel.org>
>> Sent: Thursday, March 4, 2021 5:13 AM
>> To: Marc Zyngier <maz@kernel.org>
>> Cc: Justin He <Justin.He@arm.com>; kvmarm@lists.cs.columbia.edu; James
>> Morse <James.Morse@arm.com>; Julien Thierry 
>> <julien.thierry.kdev@gmail.com>;
>> Suzuki Poulose <Suzuki.Poulose@arm.com>; Catalin Marinas
>> <Catalin.Marinas@arm.com>; Gavin Shan <gshan@redhat.com>; Yanan Wang
>> <wangyanan55@huawei.com>; Quentin Perret <qperret@google.com>; 
>> linux-arm-
>> kernel@lists.infradead.org; linux-kernel@vger.kernel.org
>> Subject: Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu 
>> walking
>> 
>> On Wed, Mar 03, 2021 at 07:07:37PM +0000, Marc Zyngier wrote:
>> > From e0524b41a71e0f17d6dc8f197e421e677d584e72 Mon Sep 17 00:00:00 2001
>> > From: Jia He <justin.he@arm.com>
>> > Date: Wed, 3 Mar 2021 10:42:25 +0800
>> > Subject: [PATCH] KVM: arm64: Fix range alignment when walking page tables
>> >
>> > When walking the page tables at a given level, and if the start
>> > address for the range isn't aligned for that level, we propagate
>> > the misalignment on each iteration at that level.
>> >
>> > This results in the walker ignoring a number of entries (depending
>> > on the original misalignment) on each subsequent iteration.
>> >
>> > Properly aligning the address at the before the next iteration
>> 
>> "at the before the next" ???
>> 
>> > addresses the issue.
>> >
>> > Cc: stable@vger.kernel.org
>> > Reported-by: Howard Zhang <Howard.Zhang@arm.com>
>> > Signed-off-by: Jia He <justin.he@arm.com>
>> > Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker
>> infrastructure")
>> > [maz: rewrite commit message]
>> > Signed-off-by: Marc Zyngier <maz@kernel.org>
>> > Link: https://lore.kernel.org/r/20210303024225.2591-1-justin.he@arm.com
>> > ---
>> >  arch/arm64/kvm/hyp/pgtable.c | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
>> > index 4d177ce1d536..124cd2f93020 100644
>> > --- a/arch/arm64/kvm/hyp/pgtable.c
>> > +++ b/arch/arm64/kvm/hyp/pgtable.c
>> > @@ -223,7 +223,7 @@ static inline int __kvm_pgtable_visit(struct
>> kvm_pgtable_walk_data *data,
>> >  		goto out;
>> >
>> >  	if (!table) {
>> > -		data->addr += kvm_granule_size(level);
>> > +		data->addr = ALIGN(data->addr, kvm_granule_size(level));
> 
> What if previous data->addr is already aligned with 
> kvm_granule_size(level)?
> Hence a deadloop? Am I missing anything else?

Indeed, well spotted. I'll revert to your original suggestion
if everybody agrees...

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-04  9:16           ` Marc Zyngier
  (?)
@ 2021-03-04  9:22             ` Will Deacon
  -1 siblings, 0 replies; 36+ messages in thread
From: Will Deacon @ 2021-03-04  9:22 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Justin He, kvmarm, James Morse, Julien Thierry, Suzuki Poulose,
	Catalin Marinas, Gavin Shan, Yanan Wang, Quentin Perret,
	linux-arm-kernel, linux-kernel

On Thu, Mar 04, 2021 at 09:16:25AM +0000, Marc Zyngier wrote:
> On 2021-03-04 00:46, Justin He wrote:
> > > On Wed, Mar 03, 2021 at 07:07:37PM +0000, Marc Zyngier wrote:
> > > > From e0524b41a71e0f17d6dc8f197e421e677d584e72 Mon Sep 17 00:00:00 2001
> > > > From: Jia He <justin.he@arm.com>
> > > > Date: Wed, 3 Mar 2021 10:42:25 +0800
> > > > Subject: [PATCH] KVM: arm64: Fix range alignment when walking page tables
> > > >
> > > > When walking the page tables at a given level, and if the start
> > > > address for the range isn't aligned for that level, we propagate
> > > > the misalignment on each iteration at that level.
> > > >
> > > > This results in the walker ignoring a number of entries (depending
> > > > on the original misalignment) on each subsequent iteration.
> > > >
> > > > Properly aligning the address at the before the next iteration
> > > 
> > > "at the before the next" ???
> > > 
> > > > addresses the issue.
> > > >
> > > > Cc: stable@vger.kernel.org
> > > > Reported-by: Howard Zhang <Howard.Zhang@arm.com>
> > > > Signed-off-by: Jia He <justin.he@arm.com>
> > > > Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker
> > > infrastructure")
> > > > [maz: rewrite commit message]
> > > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > > Link: https://lore.kernel.org/r/20210303024225.2591-1-justin.he@arm.com
> > > > ---
> > > >  arch/arm64/kvm/hyp/pgtable.c | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > > > index 4d177ce1d536..124cd2f93020 100644
> > > > --- a/arch/arm64/kvm/hyp/pgtable.c
> > > > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > > > @@ -223,7 +223,7 @@ static inline int __kvm_pgtable_visit(struct
> > > kvm_pgtable_walk_data *data,
> > > >  		goto out;
> > > >
> > > >  	if (!table) {
> > > > -		data->addr += kvm_granule_size(level);
> > > > +		data->addr = ALIGN(data->addr, kvm_granule_size(level));
> > 
> > What if previous data->addr is already aligned with
> > kvm_granule_size(level)?
> > Hence a deadloop? Am I missing anything else?
> 
> Indeed, well spotted. I'll revert to your original suggestion
> if everybody agrees...

Heh, yeah, at least one of us is awake.
For the original patch, with the updated (including typo fix) commit
message:

Acked-by: Will Deacon <will@kernel.org>

If that still counts for anything!

Will

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
  2021-03-03  2:42 ` Jia He
  (?)
@ 2021-03-04  9:55   ` Marc Zyngier
  -1 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2021-03-04  9:55 UTC (permalink / raw)
  To: Jia He, kvmarm
  Cc: linux-kernel, linux-arm-kernel, Will Deacon, Catalin Marinas

On Wed, 3 Mar 2021 10:42:25 +0800, Jia He wrote:
> If the start addr is not aligned with the granule size of that level.
> loop step size should be adjusted to boundary instead of simple
> kvm_granual_size(level) increment. Otherwise, some mmu entries might miss
> the chance to be walked through.
> E.g. Assume the unmap range [data->addr, data->end] is
> [0xff00ab2000,0xff00cb2000] in level 2 walking and NOT block mapping.
> And the 1st part of that pmd entry is [0xff00ab2000,0xff00c00000]. The
> pmd value is 0x83fbd2c1002 (not valid entry). In this case, data->addr
> should be adjusted to 0xff00c00000 instead of 0xff00cb2000.
> 
> [...]

Applied to fixes, thanks!

[1/1] KVM: arm64: Fix unaligned addr case in mmu walking
      commit: e85583b3f1fe62c9b371a3100c1c91af94005ca9

Cheers,

	M.
-- 
Without deviation from the norm, progress is not possible.



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
@ 2021-03-04  9:55   ` Marc Zyngier
  0 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2021-03-04  9:55 UTC (permalink / raw)
  To: Jia He, kvmarm
  Cc: Catalin Marinas, Will Deacon, linux-kernel, linux-arm-kernel

On Wed, 3 Mar 2021 10:42:25 +0800, Jia He wrote:
> If the start addr is not aligned with the granule size of that level.
> loop step size should be adjusted to boundary instead of simple
> kvm_granual_size(level) increment. Otherwise, some mmu entries might miss
> the chance to be walked through.
> E.g. Assume the unmap range [data->addr, data->end] is
> [0xff00ab2000,0xff00cb2000] in level 2 walking and NOT block mapping.
> And the 1st part of that pmd entry is [0xff00ab2000,0xff00c00000]. The
> pmd value is 0x83fbd2c1002 (not valid entry). In this case, data->addr
> should be adjusted to 0xff00c00000 instead of 0xff00cb2000.
> 
> [...]

Applied to fixes, thanks!

[1/1] KVM: arm64: Fix unaligned addr case in mmu walking
      commit: e85583b3f1fe62c9b371a3100c1c91af94005ca9

Cheers,

	M.
-- 
Without deviation from the norm, progress is not possible.


_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking
@ 2021-03-04  9:55   ` Marc Zyngier
  0 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2021-03-04  9:55 UTC (permalink / raw)
  To: Jia He, kvmarm
  Cc: linux-kernel, linux-arm-kernel, Will Deacon, Catalin Marinas

On Wed, 3 Mar 2021 10:42:25 +0800, Jia He wrote:
> If the start addr is not aligned with the granule size of that level.
> loop step size should be adjusted to boundary instead of simple
> kvm_granual_size(level) increment. Otherwise, some mmu entries might miss
> the chance to be walked through.
> E.g. Assume the unmap range [data->addr, data->end] is
> [0xff00ab2000,0xff00cb2000] in level 2 walking and NOT block mapping.
> And the 1st part of that pmd entry is [0xff00ab2000,0xff00c00000]. The
> pmd value is 0x83fbd2c1002 (not valid entry). In this case, data->addr
> should be adjusted to 0xff00c00000 instead of 0xff00cb2000.
> 
> [...]

Applied to fixes, thanks!

[1/1] KVM: arm64: Fix unaligned addr case in mmu walking
      commit: e85583b3f1fe62c9b371a3100c1c91af94005ca9

Cheers,

	M.
-- 
Without deviation from the norm, progress is not possible.



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2021-03-04  9:59 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-03  2:42 [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking Jia He
2021-03-03  2:42 ` Jia He
2021-03-03  2:42 ` Jia He
2021-03-03  9:54 ` Marc Zyngier
2021-03-03  9:54   ` Marc Zyngier
2021-03-03  9:54   ` Marc Zyngier
2021-03-03 11:08   ` Quentin Perret
2021-03-03 11:08     ` Quentin Perret
2021-03-03 11:08     ` Quentin Perret
2021-03-04  0:38     ` Justin He
2021-03-04  0:38       ` Justin He
2021-03-04  0:38       ` Justin He
2021-03-03 11:49   ` Will Deacon
2021-03-03 11:49     ` Will Deacon
2021-03-03 11:49     ` Will Deacon
2021-03-03 11:29 ` Will Deacon
2021-03-03 11:29   ` Will Deacon
2021-03-03 11:29   ` Will Deacon
2021-03-03 19:07   ` Marc Zyngier
2021-03-03 19:07     ` Marc Zyngier
2021-03-03 19:07     ` Marc Zyngier
2021-03-03 21:13     ` Will Deacon
2021-03-03 21:13       ` Will Deacon
2021-03-03 21:13       ` Will Deacon
2021-03-04  0:46       ` Justin He
2021-03-04  0:46         ` Justin He
2021-03-04  0:46         ` Justin He
2021-03-04  9:16         ` Marc Zyngier
2021-03-04  9:16           ` Marc Zyngier
2021-03-04  9:16           ` Marc Zyngier
2021-03-04  9:22           ` Will Deacon
2021-03-04  9:22             ` Will Deacon
2021-03-04  9:22             ` Will Deacon
2021-03-04  9:55 ` Marc Zyngier
2021-03-04  9:55   ` Marc Zyngier
2021-03-04  9:55   ` Marc Zyngier
