linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable()
@ 2020-11-30 15:25 Lai Jiangshan
  2020-11-30 15:25 ` [PATCH 2/2] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page Lai Jiangshan
  2020-12-01 17:43 ` [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable() Dave Hansen
  0 siblings, 2 replies; 9+ messages in thread
From: Lai Jiangshan @ 2020-11-30 15:25 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin

From: Lai Jiangshan <laijs@linux.alibaba.com>

Commit 825d0b73cd752 ("x86/mm/pti: Handle unaligned address gracefully
in pti_clone_pagetable()") handles unaligned addresses well for unmapped
PUD/PMD entries, but unaligned addresses also need to be handled for the
pmd_large() and PTI_CLONE_PMD cases.

For example, suppose pti_clone_pagetable(start, end, PTI_CLONE_PTE) is
called with @start = @pmd_aligned_addr + 100*PAGE_SIZE, where
@bug_addr = @pmd_aligned_addr + x*PMD_SIZE (so @bug_addr is PMD-aligned)
and @end is larger than @bug_addr + PMD_SIZE + PAGE_SIZE.

If @bug_addr is mapped as a large page while @bug_addr + PMD_SIZE is not,
it is easy to see that [@bug_addr + PMD_SIZE, @bug_addr + PMD_SIZE +
PAGE_SIZE) is not cloned: in the code, @addr = @bug_addr + 100*PAGE_SIZE
is handled as a large page and is advanced to @bug_addr + 100*PAGE_SIZE +
PMD_SIZE, which is not mapped as a large page, so the first 100 pages of
that pmd are skipped without being cloned.

The same happens for PTI_CLONE_PMD when @bug_addr + 100*PAGE_SIZE +
PMD_SIZE is larger than @end, even if @bug_addr is not mapped as a large
page; in that case several pages after @bug_addr + PMD_SIZE are not
cloned.

We also use addr = round_up(addr + 1, PAGE_SIZE) in the other branch for
coding consistency; it does not fix anything there since the addresses
are always at least PAGE_SIZE-aligned.

No real bug has been found; this patch is just for the sake of
robustness.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 arch/x86/mm/pti.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 1aab92930569..a229320515da 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -374,7 +374,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 			 */
 			*target_pmd = *pmd;
 
-			addr += PMD_SIZE;
+			addr = round_up(addr + 1, PMD_SIZE);
 
 		} else if (level == PTI_CLONE_PTE) {
 
@@ -401,7 +401,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 			/* Clone the PTE */
 			*target_pte = *pte;
 
-			addr += PAGE_SIZE;
+			addr = round_up(addr + 1, PAGE_SIZE);
 
 		} else {
 			BUG();
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 2/2] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page
  2020-11-30 15:25 [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable() Lai Jiangshan
@ 2020-11-30 15:25 ` Lai Jiangshan
  2020-11-30 16:37   ` Dave Hansen
  2020-12-01 17:43 ` [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable() Dave Hansen
  1 sibling, 1 reply; 9+ messages in thread
From: Lai Jiangshan @ 2020-11-30 15:25 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin

From: Lai Jiangshan <laijs@linux.alibaba.com>

No caller currently calls pti_clone_pagetable() on a range that includes
a PUD-sized large page (1G). If it were called on such a range, there
would be a bug on the caller side, so it is worth a warning for
robustness.

Also add checks for pgd_large() and p4d_large() for the same reason;
both are constant 0, so they merely act as self-documentation in the
code without adding any overhead.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 arch/x86/mm/pti.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index a229320515da..89366fec956b 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -321,10 +321,10 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 			break;
 
 		pgd = pgd_offset_k(addr);
-		if (WARN_ON(pgd_none(*pgd)))
+		if (WARN_ON(pgd_none(*pgd) || pgd_large(*pgd)))
 			return;
 		p4d = p4d_offset(pgd, addr);
-		if (WARN_ON(p4d_none(*p4d)))
+		if (WARN_ON(p4d_none(*p4d) || p4d_large(*p4d)))
 			return;
 
 		pud = pud_offset(p4d, addr);
@@ -333,6 +333,8 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 			addr = round_up(addr + 1, PUD_SIZE);
 			continue;
 		}
+		if (WARN_ON(pud_large(*pud)))
+			return;
 
 		pmd = pmd_offset(pud, addr);
 		if (pmd_none(*pmd)) {
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH 2/2] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page
  2020-11-30 15:25 ` [PATCH 2/2] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page Lai Jiangshan
@ 2020-11-30 16:37   ` Dave Hansen
  0 siblings, 0 replies; 9+ messages in thread
From: Dave Hansen @ 2020-11-30 16:37 UTC (permalink / raw)
  To: Lai Jiangshan, linux-kernel
  Cc: Lai Jiangshan, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin

On 11/30/20 7:25 AM, Lai Jiangshan wrote:
> --- a/arch/x86/mm/pti.c
> +++ b/arch/x86/mm/pti.c
> @@ -321,10 +321,10 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
>  			break;
>  
>  		pgd = pgd_offset_k(addr);
> -		if (WARN_ON(pgd_none(*pgd)))
> +		if (WARN_ON(pgd_none(*pgd) || pgd_large(*pgd)))
>  			return;
>  		p4d = p4d_offset(pgd, addr);
> -		if (WARN_ON(p4d_none(*p4d)))
> +		if (WARN_ON(p4d_none(*p4d) || p4d_large(*p4d)))
>  			return;
>  
>  		pud = pud_offset(p4d, addr);
> @@ -333,6 +333,8 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
>  			addr = round_up(addr + 1, PUD_SIZE);
>  			continue;
>  		}
> +		if (WARN_ON(pud_large(*pud)))
> +			return;

One bit of practical application missing from the changelog: Right now,
we only clone parts of the kernel image and the cpu entry area.  The cpu
entry area would be insane to map with 1G pages since it maps so many
different kinds of pages and has *small* mappings.

For the kernel image to have a 1GB area with uniform permissions seems
pretty far away to me.  It would be an even more remote possibility that
a large swath of it would need to be cloned for PTI.  Kernel text with a
non-PCID system is probably as close as we would get.  I'm also not even
sure we have the code to create 1GB mappings for parts of the image.

While I'm fine with this for robustness and self-documentation, I think
there needs to be a bit more on this in the changelog.

Also, wouldn't we be better off if we added warnings to the p*d_offset()
functions?  The real problem here, for instance, is passing a
pgd_large()==1 pgd to p4d_offset().

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable()
  2020-11-30 15:25 [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable() Lai Jiangshan
  2020-11-30 15:25 ` [PATCH 2/2] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page Lai Jiangshan
@ 2020-12-01 17:43 ` Dave Hansen
  2020-12-02  8:55   ` Lai Jiangshan
  2020-12-10 14:35   ` [PATCH V2 1/3] x86/mm/pti: handle " Lai Jiangshan
  1 sibling, 2 replies; 9+ messages in thread
From: Dave Hansen @ 2020-12-01 17:43 UTC (permalink / raw)
  To: Lai Jiangshan, linux-kernel
  Cc: Lai Jiangshan, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin

On 11/30/20 7:25 AM, Lai Jiangshan wrote:
> Commit 825d0b73cd752 ("x86/mm/pti: Handle unaligned address gracefully
> in pti_clone_pagetable()") handles unaligned addresses well for unmapped
> PUD/PMD entries, but unaligned addresses also need to be handled for the
> pmd_large() and PTI_CLONE_PMD cases.

That 825d0b73cd752 changelog says:

>     pti_clone_pmds() assumes that the supplied address is either:
>     
>      - properly PUD/PMD aligned
>     or
>      - the address is actually mapped which means that independently
>        of the mapping level (PUD/PMD/PTE) the next higher mapping
>        exists.

... and that was the root of the bug.  If there was a large, unmapped
area, it would skip a PUD_SIZE or PMD_SIZE *area* instead of skipping to
the *next* pud/pmd.

The case being patched here is from a *present* PTE/PMD, so it's a
mapped area, not a hole.

That said, I think the previous changelog was wrong.  An unaligned
address to a mapped, large (2M) region followed by a smaller (4k) region
would skip too far into the 4k region.

That said, I'm not sure I like this fix.  If someone is explicitly
asking to clone a PMD (which pti_clone_pgtable() forces you to do), they
better align the address.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable()
  2020-12-01 17:43 ` [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable() Dave Hansen
@ 2020-12-02  8:55   ` Lai Jiangshan
  2020-12-10 14:35   ` [PATCH V2 1/3] x86/mm/pti: handle " Lai Jiangshan
  1 sibling, 0 replies; 9+ messages in thread
From: Lai Jiangshan @ 2020-12-02  8:55 UTC (permalink / raw)
  To: Dave Hansen
  Cc: LKML, Lai Jiangshan, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	X86 ML, H. Peter Anvin

On Wed, Dec 2, 2020 at 1:43 AM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 11/30/20 7:25 AM, Lai Jiangshan wrote:
> > Commit 825d0b73cd752 ("x86/mm/pti: Handle unaligned address gracefully
> > in pti_clone_pagetable()") handles unaligned addresses well for unmapped
> > PUD/PMD entries, but unaligned addresses also need to be handled for the
> > pmd_large() and PTI_CLONE_PMD cases.
>
> That 825d0b73cd752 changelog says:
>
> >     pti_clone_pmds() assumes that the supplied address is either:
> >
> >      - properly PUD/PMD aligned
> >     or
> >      - the address is actually mapped which means that independently
> >        of the mapping level (PUD/PMD/PTE) the next higher mapping
> >        exists.
>
> ... and that was the root of the bug.  If there was a large, unmapped
> area, it would skip a PUD_SIZE or PMD_SIZE *area* instead of skipping to
> the *next* pud/pmd.
>
> The case being patched here is from a *present* PTE/PMD, so it's a
> mapped area, not a hole.
>
> That said, I think the previous changelog was wrong.  An unaligned
> address to a mapped, large (2M) region followed by a smaller (4k) region
> would skip too far into the 4k region.
>
> That said, I'm not sure I like this fix.  If someone is explicitly
> asking to clone a PMD (which pti_clone_pgtable() forces you to do), they
> better align the address.

Hello, Dave

I think I got what you mean more or less, but I don't think I can
update the patch to address all your concerns and requirements.

I know very little about the area.

Could you please make new patches to replace mine?

Thanks
Lai.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH V2 1/3] x86/mm/pti: handle unaligned address for pmd clone in pti_clone_pagetable()
  2020-12-01 17:43 ` [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable() Dave Hansen
  2020-12-02  8:55   ` Lai Jiangshan
@ 2020-12-10 14:35   ` Lai Jiangshan
  2020-12-10 14:35     ` [PATCH V2 2/3] x86/mm/pti: issue warning when mapping large pmd beyond specified range Lai Jiangshan
                       ` (2 more replies)
  1 sibling, 3 replies; 9+ messages in thread
From: Lai Jiangshan @ 2020-12-10 14:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin

From: Lai Jiangshan <laijs@linux.alibaba.com>

Commit 825d0b73cd752 ("x86/mm/pti: Handle unaligned address gracefully
in pti_clone_pagetable()") handles unaligned addresses well for unmapped
PUD/PMD entries, but unaligned addresses for mapped pmds also need to be
handled.

For a mapped pmd, if @addr is not aligned to PMD_SIZE, then under the
current logic the next pmd (for PTI_CLONE_PMD, or when the next pmd is
large) or the last ptes in the next pmd (for PTI_CLONE_PTE) will not be
cloned when @end < @addr + PMD_SIZE.

It is not a good idea to force alignment in the caller because of one of
the cases (see the comments in the code), so just handle the alignment
in pti_clone_pagetable() instead.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 arch/x86/mm/pti.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 1aab92930569..7ee99ef13a99 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -342,6 +342,21 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 		}
 
 		if (pmd_large(*pmd) || level == PTI_CLONE_PMD) {
+			/*
+			 * pti_clone_kernel_text() might be called with
+			 * @start not aligned to PMD_SIZE. We need to make
+			 * it aligned, otherwise the next pmd or last ptes
+			 * are not cloned when @end < @addr + PMD_SIZE.
+			 *
+			 * We can't force pti_clone_kernel_text() to align
+			 * the @addr to PMD_SIZE when level == PTI_CLONE_PTE.
+			 * But the problem can still exist when the
+			 * first pmd is large. And it is not a good idea to
+			 * check whether the first pmd is large or not in the
+			 * caller, so we just simply align it here.
+			 */
+			addr = round_down(addr, PMD_SIZE);
+
 			target_pmd = pti_user_pagetable_walk_pmd(addr);
 			if (WARN_ON(!target_pmd))
 				return;
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH V2 2/3] x86/mm/pti: issue warning when mapping large pmd beyond specified range
  2020-12-10 14:35   ` [PATCH V2 1/3] x86/mm/pti: handle " Lai Jiangshan
@ 2020-12-10 14:35     ` Lai Jiangshan
  2020-12-10 14:35     ` [PATCH V2 3/3] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page Lai Jiangshan
  2020-12-18 13:00     ` [PATCH V2 1/3] x86/mm/pti: handle unaligned address for pmd clone in pti_clone_pagetable() Lai Jiangshan
  2 siblings, 0 replies; 9+ messages in thread
From: Lai Jiangshan @ 2020-12-10 14:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin

From: Lai Jiangshan <laijs@linux.alibaba.com>

With PTI_CLONE_PTE, the caller does not want to expose pages beyond the
specified range, so it is worth a warning.

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 arch/x86/mm/pti.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 7ee99ef13a99..cd6da1d42ba9 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -356,6 +356,13 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 			 * caller, so we just simply align it here.
 			 */
 			addr = round_down(addr, PMD_SIZE);
+			/*
+			 * Mapping large pmd beyond [start, end) may expose
+			 * secrets to user-space when it wants to clone ptes
+			 * only.
+			 */
+			WARN_ON_ONCE(level == PTI_CLONE_PTE &&
+				     (addr < start || end < addr + PMD_SIZE));
 
 			target_pmd = pti_user_pagetable_walk_pmd(addr);
 			if (WARN_ON(!target_pmd))
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH V2 3/3] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page
  2020-12-10 14:35   ` [PATCH V2 1/3] x86/mm/pti: handle " Lai Jiangshan
  2020-12-10 14:35     ` [PATCH V2 2/3] x86/mm/pti: issue warning when mapping large pmd beyond specified range Lai Jiangshan
@ 2020-12-10 14:35     ` Lai Jiangshan
  2020-12-18 13:00     ` [PATCH V2 1/3] x86/mm/pti: handle unaligned address for pmd clone in pti_clone_pagetable() Lai Jiangshan
  2 siblings, 0 replies; 9+ messages in thread
From: Lai Jiangshan @ 2020-12-10 14:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Lai Jiangshan, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin

From: Lai Jiangshan <laijs@linux.alibaba.com>

Right now, we only clone parts of the kernel image and the cpu entry area.
The cpu entry area would be insane to map with 1G pages since it maps so
many different kinds of pages and has *small* mappings.

For the kernel image to have a 1GB area with uniform permissions seems
pretty far off in practice.  It would be an even more remote possibility
that a large swath of it would need to be cloned for PTI.  Kernel text
with a non-PCID system is probably as close as we would get.  I'm also
not even sure we have the code to create 1GB mappings for parts of the
image.

In other words, no caller currently calls pti_clone_pagetable() on a
range that includes a PUD-sized large page (1G). If it were called on
such a range, there would be a bug on the caller side or elsewhere, so
it is worth a warning for robustness.

Also add checks for pgd_large() and p4d_large() for the same reason;
both are constant 0, so they merely act as self-documentation in the
code without adding any overhead.

[ Many thanks to Dave Hansen for the more elaborate changelog ]

Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
 arch/x86/mm/pti.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index cd6da1d42ba9..e8d2df072c5c 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -321,10 +321,10 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 			break;
 
 		pgd = pgd_offset_k(addr);
-		if (WARN_ON(pgd_none(*pgd)))
+		if (WARN_ON(pgd_none(*pgd) || pgd_large(*pgd)))
 			return;
 		p4d = p4d_offset(pgd, addr);
-		if (WARN_ON(p4d_none(*p4d)))
+		if (WARN_ON(p4d_none(*p4d) || p4d_large(*p4d)))
 			return;
 
 		pud = pud_offset(p4d, addr);
@@ -333,6 +333,8 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 			addr = round_up(addr + 1, PUD_SIZE);
 			continue;
 		}
+		if (WARN_ON(pud_large(*pud)))
+			return;
 
 		pmd = pmd_offset(pud, addr);
 		if (pmd_none(*pmd)) {
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH V2 1/3] x86/mm/pti: handle unaligned address for pmd clone in pti_clone_pagetable()
  2020-12-10 14:35   ` [PATCH V2 1/3] x86/mm/pti: handle " Lai Jiangshan
  2020-12-10 14:35     ` [PATCH V2 2/3] x86/mm/pti: issue warning when mapping large pmd beyond specified range Lai Jiangshan
  2020-12-10 14:35     ` [PATCH V2 3/3] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page Lai Jiangshan
@ 2020-12-18 13:00     ` Lai Jiangshan
  2 siblings, 0 replies; 9+ messages in thread
From: Lai Jiangshan @ 2020-12-18 13:00 UTC (permalink / raw)
  To: LKML
  Cc: Lai Jiangshan, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, X86 ML,
	H. Peter Anvin

Hello, Dave Hansen

Could you help review the patches, please?

I think they address your suggestions, except for forcing alignment in
the caller; the reason is explained in the code comments.

Thanks
Lai

On Thu, Dec 10, 2020 at 9:34 PM Lai Jiangshan <jiangshanlai@gmail.com> wrote:
>
> From: Lai Jiangshan <laijs@linux.alibaba.com>
>
> Commit 825d0b73cd752 ("x86/mm/pti: Handle unaligned address gracefully
> in pti_clone_pagetable()") handles unaligned addresses well for unmapped
> PUD/PMD entries, but unaligned addresses for mapped pmds also need to be
> handled.
>
> For a mapped pmd, if @addr is not aligned to PMD_SIZE, then under the
> current logic the next pmd (for PTI_CLONE_PMD, or when the next pmd is
> large) or the last ptes in the next pmd (for PTI_CLONE_PTE) will not be
> cloned when @end < @addr + PMD_SIZE.
>
> It is not a good idea to force alignment in the caller because of one
> of the cases (see the comments in the code), so just handle the
> alignment in pti_clone_pagetable() instead.
>
> Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
> ---
>  arch/x86/mm/pti.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
> index 1aab92930569..7ee99ef13a99 100644
> --- a/arch/x86/mm/pti.c
> +++ b/arch/x86/mm/pti.c
> @@ -342,6 +342,21 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
>                 }
>
>                 if (pmd_large(*pmd) || level == PTI_CLONE_PMD) {
> +                       /*
> +                        * pti_clone_kernel_text() might be called with
> +                        * @start not aligned to PMD_SIZE. We need to make
> +                        * it aligned, otherwise the next pmd or last ptes
> +                        * are not cloned when @end < @addr + PMD_SIZE.
> +                        *
> +                        * We can't force pti_clone_kernel_text() to align
> +                        * the @addr to PMD_SIZE when level == PTI_CLONE_PTE.
> +                        * But the problem can still exist when the
> +                        * first pmd is large. And it is not a good idea to
> +                        * check whether the first pmd is large or not in the
> +                        * caller, so we just simply align it here.
> +                        */
> +                       addr = round_down(addr, PMD_SIZE);
> +
>                         target_pmd = pti_user_pagetable_walk_pmd(addr);
>                         if (WARN_ON(!target_pmd))
>                                 return;
> --
> 2.19.1.6.gb485710b
>

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2020-12-18 13:01 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-30 15:25 [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable() Lai Jiangshan
2020-11-30 15:25 ` [PATCH 2/2] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page Lai Jiangshan
2020-11-30 16:37   ` Dave Hansen
2020-12-01 17:43 ` [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable() Dave Hansen
2020-12-02  8:55   ` Lai Jiangshan
2020-12-10 14:35   ` [PATCH V2 1/3] x86/mm/pti: handle " Lai Jiangshan
2020-12-10 14:35     ` [PATCH V2 2/3] x86/mm/pti: issue warning when mapping large pmd beyond specified range Lai Jiangshan
2020-12-10 14:35     ` [PATCH V2 3/3] x86/mm/pti: warn and stop when pti_clone_pagetable() is on 1G page Lai Jiangshan
2020-12-18 13:00     ` [PATCH V2 1/3] x86/mm/pti: handle unaligned address for pmd clone in pti_clone_pagetable() Lai Jiangshan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).