* [PATCH bpf] xsk: fix number of pinned pages/umem size discrepancy
@ 2020-09-10  7:56 Björn Töpel
  2020-09-10  9:24 ` Loftus, Ciara
  2020-09-15  1:37 ` Alexei Starovoitov
  0 siblings, 2 replies; 4+ messages in thread
From: Björn Töpel @ 2020-09-10  7:56 UTC (permalink / raw)
  To: netdev, bpf, ast, daniel
  Cc: Björn Töpel, maximmi, magnus.karlsson, jonathan.lemon,
	ciara.loftus

From: Björn Töpel <bjorn.topel@intel.com>

For AF_XDP sockets, there was a discrepancy between the number of
pinned pages and the size of the umem region.

The size of the umem region is used to validate the AF_XDP descriptor
addresses. The logic that pinned the pages covered by the region only
took whole pages into consideration, creating a mismatch between the
size and pinned pages. A user could then pass AF_XDP addresses outside
the range of pinned pages, but still within the size of the region,
crashing the kernel.
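
To make the mismatch concrete, here is a minimal userspace sketch (not
part of this patch; the 4096-byte page size and all names are assumptions
for illustration only) of how the old truncating page count left the tail
of a non-page-aligned umem unpinned, while descriptor validation still
accepted addresses up to the full length:

  #include <stdio.h>
  #include <stdint.h>

  #define MY_PAGE_SIZE 4096ULL	/* assumed page size, for illustration only */

  int main(void)
  {
  	uint64_t size = 2 * MY_PAGE_SIZE + 2048; /* umem length, not page aligned */

  	/* Old logic: truncating division, only whole pages were pinned. */
  	uint64_t pinned = (size / MY_PAGE_SIZE) * MY_PAGE_SIZE;

  	/* Descriptor validation still allowed addresses up to 'size', so the
  	 * last (size - pinned) bytes had no pinned backing pages. */
  	printf("size=%llu pinned=%llu unbacked tail=%llu\n",
  	       (unsigned long long)size,
  	       (unsigned long long)pinned,
  	       (unsigned long long)(size - pinned));
  	return 0;
  }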

This change correctly calculates the number of pages to be
pinned. Further, the size check for the aligned mode is
simplified. Now the code simply checks if the size is divisible by the
chunk size.
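
A minimal sketch of the corrected arithmetic (again a userspace
illustration under the same assumed page size, with hypothetical helper
names, not the kernel code itself): the page count is rounded up, and in
aligned mode the length must divide evenly into chunks, mirroring the
div_u64_rem() checks in the diff below.

  #include <stdio.h>
  #include <stdint.h>
  #include <stdbool.h>

  #define MY_PAGE_SIZE 4096ULL	/* assumed page size, for illustration only */

  /* Round the umem length up to whole pages (div_u64_rem() + npgs++ in the patch). */
  static uint64_t npgs_for(uint64_t size)
  {
  	return (size + MY_PAGE_SIZE - 1) / MY_PAGE_SIZE;
  }

  /* Aligned mode: the length must be an exact multiple of the chunk size. */
  static bool size_ok(uint64_t size, uint32_t chunk_size, bool unaligned)
  {
  	if (size / chunk_size == 0)
  		return false;
  	return unaligned || (size % chunk_size) == 0;
  }

  int main(void)
  {
  	uint64_t size = 2 * MY_PAGE_SIZE + 2048; /* 2.5 pages, 2048-byte chunks */

  	printf("npgs=%llu aligned_ok=%d\n",
  	       (unsigned long long)npgs_for(size), size_ok(size, 2048, false));
  	return 0;
  }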

Fixes: bbff2f321a86 ("xsk: new descriptor addressing scheme")
Reported-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
 net/xdp/xdp_umem.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index e97db37354e4..b010bfde0149 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -303,10 +303,10 @@ static int xdp_umem_account_pages(struct xdp_umem *umem)
 
 static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
 {
+	u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom;
 	bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
-	u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
 	u64 npgs, addr = mr->addr, size = mr->len;
-	unsigned int chunks, chunks_per_page;
+	unsigned int chunks, chunks_rem;
 	int err;
 
 	if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
@@ -336,19 +336,18 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
 	if ((addr + size) < addr)
 		return -EINVAL;
 
-	npgs = size >> PAGE_SHIFT;
+	npgs = div_u64_rem(size, PAGE_SIZE, &npgs_rem);
+	if (npgs_rem)
+		npgs++;
 	if (npgs > U32_MAX)
 		return -EINVAL;
 
-	chunks = (unsigned int)div_u64(size, chunk_size);
+	chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem);
 	if (chunks == 0)
 		return -EINVAL;
 
-	if (!unaligned_chunks) {
-		chunks_per_page = PAGE_SIZE / chunk_size;
-		if (chunks < chunks_per_page || chunks % chunks_per_page)
-			return -EINVAL;
-	}
+	if (!unaligned_chunks && chunks_rem)
+		return -EINVAL;
 
 	if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
 		return -EINVAL;

base-commit: 746f534a4809e07f427f7d13d10f3a6a9641e5c3
-- 
2.25.1



* RE: [PATCH bpf] xsk: fix number of pinned pages/umem size discrepancy
  2020-09-10  7:56 [PATCH bpf] xsk: fix number of pinned pages/umem size discrepancy Björn Töpel
@ 2020-09-10  9:24 ` Loftus, Ciara
  2020-09-14 17:14   ` Song Liu
  2020-09-15  1:37 ` Alexei Starovoitov
  1 sibling, 1 reply; 4+ messages in thread
From: Loftus, Ciara @ 2020-09-10  9:24 UTC (permalink / raw)
  To: Björn Töpel, netdev, bpf, ast, daniel
  Cc: Topel, Bjorn, maximmi, Karlsson, Magnus, jonathan.lemon

> 
> From: Björn Töpel <bjorn.topel@intel.com>
> 
> For AF_XDP sockets, there was a discrepancy between the number of
> pinned pages and the size of the umem region.
> 
> The size of the umem region is used to validate the AF_XDP descriptor
> addresses. The logic that pinned the pages covered by the region only
> took whole pages into consideration, creating a mismatch between the
> size and pinned pages. A user could then pass AF_XDP addresses outside
> the range of pinned pages, but still within the size of the region,
> crashing the kernel.
> 
> This change correctly calculates the number of pages to be
> pinned. Further, the size check for the aligned mode is
> simplified. Now the code simply checks if the size is divisible by the
> chunk size.
> 
> Fixes: bbff2f321a86 ("xsk: new descriptor addressing scheme")
> Reported-by: Ciara Loftus <ciara.loftus@intel.com>
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>

Thanks for the patch Björn.

Tested-by: Ciara Loftus <ciara.loftus@intel.com>

> ---
>  net/xdp/xdp_umem.c | 17 ++++++++---------
>  1 file changed, 8 insertions(+), 9 deletions(-)
> 
> diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
> index e97db37354e4..b010bfde0149 100644
> --- a/net/xdp/xdp_umem.c
> +++ b/net/xdp/xdp_umem.c
> @@ -303,10 +303,10 @@ static int xdp_umem_account_pages(struct xdp_umem *umem)
> 
>  static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
>  {
> +	u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom;
>  	bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
> -	u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
>  	u64 npgs, addr = mr->addr, size = mr->len;
> -	unsigned int chunks, chunks_per_page;
> +	unsigned int chunks, chunks_rem;
>  	int err;
> 
>  	if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
> @@ -336,19 +336,18 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
>  	if ((addr + size) < addr)
>  		return -EINVAL;
> 
> -	npgs = size >> PAGE_SHIFT;
> +	npgs = div_u64_rem(size, PAGE_SIZE, &npgs_rem);
> +	if (npgs_rem)
> +		npgs++;
>  	if (npgs > U32_MAX)
>  		return -EINVAL;
> 
> -	chunks = (unsigned int)div_u64(size, chunk_size);
> +	chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem);
>  	if (chunks == 0)
>  		return -EINVAL;
> 
> -	if (!unaligned_chunks) {
> -		chunks_per_page = PAGE_SIZE / chunk_size;
> -		if (chunks < chunks_per_page || chunks % chunks_per_page)
> -			return -EINVAL;
> -	}
> +	if (!unaligned_chunks && chunks_rem)
> +		return -EINVAL;
> 
>  	if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
>  		return -EINVAL;
> 
> base-commit: 746f534a4809e07f427f7d13d10f3a6a9641e5c3
> --
> 2.25.1



* Re: [PATCH bpf] xsk: fix number of pinned pages/umem size discrepancy
  2020-09-10  9:24 ` Loftus, Ciara
@ 2020-09-14 17:14   ` Song Liu
  0 siblings, 0 replies; 4+ messages in thread
From: Song Liu @ 2020-09-14 17:14 UTC (permalink / raw)
  To: Loftus, Ciara
  Cc: Björn Töpel, netdev, bpf, ast, daniel, Topel, Bjorn,
	maximmi, Karlsson, Magnus, jonathan.lemon

On Thu, Sep 10, 2020 at 2:29 AM Loftus, Ciara <ciara.loftus@intel.com> wrote:
>
> >
> > From: Björn Töpel <bjorn.topel@intel.com>
> >
> > For AF_XDP sockets, there was a discrepancy between the number of
> > pinned pages and the size of the umem region.
> >
> > The size of the umem region is used to validate the AF_XDP descriptor
> > addresses. The logic that pinned the pages covered by the region only
> > took whole pages into consideration, creating a mismatch between the
> > size and pinned pages. A user could then pass AF_XDP addresses outside
> > the range of pinned pages, but still within the size of the region,
> > crashing the kernel.
> >
> > This change correctly calculates the number of pages to be
> > pinned. Further, the size check for the aligned mode is
> > simplified. Now the code simply checks if the size is divisible by the
> > chunk size.
> >
> > Fixes: bbff2f321a86 ("xsk: new descriptor addressing scheme")
> > Reported-by: Ciara Loftus <ciara.loftus@intel.com>
> > Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
>
> Thanks for the patch Björn.
>
> Tested-by: Ciara Loftus <ciara.loftus@intel.com>

Acked-by: Song Liu <songliubraving@fb.com>


* Re: [PATCH bpf] xsk: fix number of pinned pages/umem size discrepancy
  2020-09-10  7:56 [PATCH bpf] xsk: fix number of pinned pages/umem size discrepancy Björn Töpel
  2020-09-10  9:24 ` Loftus, Ciara
@ 2020-09-15  1:37 ` Alexei Starovoitov
  1 sibling, 0 replies; 4+ messages in thread
From: Alexei Starovoitov @ 2020-09-15  1:37 UTC (permalink / raw)
  To: Björn Töpel
  Cc: Network Development, bpf, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, maximmi, Karlsson, Magnus, Jonathan Lemon,
	Ciara Loftus

On Thu, Sep 10, 2020 at 12:56 AM Björn Töpel <bjorn.topel@gmail.com> wrote:
>
> From: Björn Töpel <bjorn.topel@intel.com>
>
> For AF_XDP sockets, there was a discrepancy between the number of
> pinned pages and the size of the umem region.
>
> The size of the umem region is used to validate the AF_XDP descriptor
> addresses. The logic that pinned the pages covered by the region only
> took whole pages into consideration, creating a mismatch between the
> size and pinned pages. A user could then pass AF_XDP addresses outside
> the range of pinned pages, but still within the size of the region,
> crashing the kernel.
>
> This change correctly calculates the number of pages to be
> pinned. Further, the size check for the aligned mode is
> simplified. Now the code simply checks if the size is divisible by the
> chunk size.
>
> Fixes: bbff2f321a86 ("xsk: new descriptor addressing scheme")
> Reported-by: Ciara Loftus <ciara.loftus@intel.com>
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>

Applied. Thanks

