* [PATCH] mm/memory.c: make remap_pfn_range() reject unaligned addr
@ 2020-06-17 23:35 ` Kaiyu Zhang
From: Kaiyu Zhang @ 2020-06-17 23:35 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, linux-kernel, Alex Zhang
From: Alex Zhang <zhangalex@google.com>
This function implicitly assumes that the addr passed in is page-aligned.
A non-page-aligned addr could ultimately cause a kernel bug in
remap_pte_range(), as the loop exit condition there may never be
satisfied. This patch documents the requirement and adds an explicit
check for it.
Signed-off-by: Alex Zhang <zhangalex@google.com>
---
mm/memory.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index dc7f3543b1fd..16422acb6da8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2081,7 +2081,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
/**
* remap_pfn_range - remap kernel memory to userspace
* @vma: user vma to map to
- * @addr: target user address to start at
+ * @addr: target page aligned user address to start at
* @pfn: page frame number of kernel physical memory address
* @size: size of mapping area
* @prot: page protection flags for this mapping
@@ -2100,6 +2100,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
unsigned long remap_pfn = pfn;
int err;
+ if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
+ return -EINVAL;
+
/*
* Physically remapped pages are special. Tell the
* rest of the world about it:
--
2.27.0.111.gc72c7da667-goog
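The failure mode described in the changelog can be modelled outside the
kernel. Below is a minimal userspace sketch of the remap_pte_range()
exit condition (a simplified model, not the kernel loop itself; the
PAGE_SIZE constant and the example addresses are assumptions for
illustration):

#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	/* end is page aligned, as pmd_addr_end() guarantees in the kernel */
	unsigned long end  = 0x3000;
	/* a misaligned start address, as a buggy caller might pass in */
	unsigned long addr = 0x1010;

	/* model of the loop's exit test: addr != end */
	while (addr != end) {
		addr += PAGE_SIZE;	/* 0x2010, 0x3010, ... */
		if (addr > end) {	/* stepped over end without matching it */
			printf("addr 0x%lx overshot end 0x%lx\n", addr, end);
			return 1;
		}
	}
	return 0;
}

Because addr keeps its misalignment on every PAGE_SIZE step, it can never
compare equal to a page-aligned end; the kernel loop has no overshoot
guard, so the walk would simply never terminate.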
* Re: [PATCH] mm/memory.c: make remap_pfn_range() reject unaligned addr
2020-06-17 22:34 ` Kaiyu Zhang
@ 2020-06-17 22:47 ` Andrew Morton
From: Andrew Morton @ 2020-06-17 22:47 UTC (permalink / raw)
To: Kaiyu Zhang; +Cc: linux-mm, linux-kernel
On Wed, 17 Jun 2020 15:34:14 -0700 Kaiyu Zhang <zhangalex@google.com> wrote:
> From: Alex Zhang <zhangalex@google.com>
>
> This function implicitly assumes that the addr passed in is
> page-aligned. A non-page-aligned addr could ultimately cause a kernel
> bug in remap_pte_range(), as the loop exit condition there may never
> be satisfied. This patch documents the requirement and adds an
> explicit check for it.
>
> ...
>
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2081,7 +2081,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
> /**
> * remap_pfn_range - remap kernel memory to userspace
> * @vma: user vma to map to
> - * @addr: target user address to start at
> + * @addr: target page aligned user address to start at
> * @pfn: page frame number of kernel physical memory address
> * @size: size of mapping area
> * @prot: page protection flags for this mapping
> @@ -2100,6 +2100,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
> unsigned long remap_pfn = pfn;
> int err;
>
> + if (!PAGE_ALIGN(addr))
> + return -EINVAL;
> +
That won't work: PAGE_ALIGN() rounds the address up to the next page
boundary, so !PAGE_ALIGN(addr) is false for any nonzero addr.
PAGE_ALIGNED() will do what you want.
Also, as this is an error in the calling code, it would be better to do
if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
	return -EINVAL;
so that the offending code can be fixed up.
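For reference, the difference between the two macros can be demonstrated
with simplified stand-ins for their definitions (simplified from the
kernel's ALIGN()/IS_ALIGNED() helpers; the address used is illustrative):

#include <stdio.h>

#define PAGE_SIZE	4096UL
/* simplified stand-ins for the kernel macros */
#define PAGE_ALIGN(addr)	(((addr) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define PAGE_ALIGNED(addr)	(((addr) & (PAGE_SIZE - 1)) == 0)

int main(void)
{
	unsigned long addr = 0x1010;	/* not page aligned */

	/* PAGE_ALIGN() rounds up, so the result is nonzero: 0x2000 */
	printf("PAGE_ALIGN(%#lx)   = %#lx\n", addr, PAGE_ALIGN(addr));
	/* PAGE_ALIGNED() is the predicate: 0 for a misaligned address */
	printf("PAGE_ALIGNED(%#lx) = %d\n", addr, (int)PAGE_ALIGNED(addr));
	return 0;
}

So !PAGE_ALIGN(addr) only fires for addr == 0, while !PAGE_ALIGNED(addr)
catches every misaligned address.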
Is there any code in the kernel tree which actually has this error?
* [PATCH] mm/memory.c: make remap_pfn_range() reject unaligned addr
@ 2020-06-17 22:34 ` Kaiyu Zhang
From: Kaiyu Zhang @ 2020-06-17 22:34 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, linux-kernel, Alex Zhang
From: Alex Zhang <zhangalex@google.com>
This function implicitly assumes that the addr passed in is page-aligned.
A non-page-aligned addr could ultimately cause a kernel bug in
remap_pte_range(), as the loop exit condition there may never be
satisfied. This patch documents the requirement and adds an explicit
check for it.
Signed-off-by: Alex Zhang <zhangalex@google.com>
---
mm/memory.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index dc7f3543b1fd..9cb0a75f1555 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2081,7 +2081,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
/**
* remap_pfn_range - remap kernel memory to userspace
* @vma: user vma to map to
- * @addr: target user address to start at
+ * @addr: target page aligned user address to start at
* @pfn: page frame number of kernel physical memory address
* @size: size of mapping area
* @prot: page protection flags for this mapping
@@ -2100,6 +2100,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
unsigned long remap_pfn = pfn;
int err;
+ if (!PAGE_ALIGN(addr))
+ return -EINVAL;
+
/*
* Physically remapped pages are special. Tell the
* rest of the world about it:
--
2.27.0.290.gba653c62da-goog
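For comparison, a typical correct caller passes an address that is page
aligned by construction. A sketch of a driver ->mmap handler follows
(the my_dev_* names are made up for illustration, not taken from this
thread):

#include <linux/fs.h>
#include <linux/mm.h>

static phys_addr_t my_dev_phys;	/* set at probe time in a real driver */

static int my_dev_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	/*
	 * vma->vm_start is page aligned by construction, so the
	 * alignment check proposed above is satisfied automatically.
	 */
	return remap_pfn_range(vma, vma->vm_start,
			       my_dev_phys >> PAGE_SHIFT,
			       size, vma->vm_page_prot);
}

Callers that offset addr from vm_start by a non-page-multiple amount are
the ones the new check would catch.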
* [PATCH] mm/memory.c: make remap_pfn_range() reject unaligned addr
@ 2020-06-17 22:32 ` Kaiyu Zhang
From: Kaiyu Zhang @ 2020-06-17 22:32 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, linux-kernel, Alex Zhang
From: Alex Zhang <zhangalex@google.com>
This function implicitly assumes that the addr passed in is page-aligned.
A non-page-aligned addr could ultimately cause a kernel bug in
remap_pte_range(), as the loop exit condition there may never be
satisfied. This patch documents the requirement and adds an explicit
check for it.
Signed-off-by: Alex Zhang <zhangalex@google.com>
---
mm/memory.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index dc7f3543b1fd..9cb0a75f1555 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2081,7 +2081,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
/**
* remap_pfn_range - remap kernel memory to userspace
* @vma: user vma to map to
- * @addr: target user address to start at
+ * @addr: target page aligned user address to start at
* @pfn: page frame number of kernel physical memory address
* @size: size of mapping area
* @prot: page protection flags for this mapping
@@ -2100,6 +2100,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
unsigned long remap_pfn = pfn;
int err;
+ if (!PAGE_ALIGN(addr))
+ return -EINVAL;
+
/*
* Physically remapped pages are special. Tell the
* rest of the world about it:
--
2.27.0.290.gba653c62da-goog