From: "Loftus, Ciara" <ciara.loftus@intel.com>
To: "Du, Frank" <frank.du@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [PATCH] net/af_xdp: fix umem map size for zero copy
Date: Tue, 30 Apr 2024 09:22:02 +0000
Message-ID: <MW4PR11MB587251A8C54D8F13B79F539A8E1A2@MW4PR11MB5872.namprd11.prod.outlook.com>
In-Reply-To: <PH0PR11MB4775D5A2BFDEF110F563FBAD80142@PH0PR11MB4775.namprd11.prod.outlook.com>

> >
> > > Subject: [PATCH] net/af_xdp: fix umem map size for zero copy
> > >
> > > The current calculation assumes that the mbufs are contiguous.
> > > However, this assumption is incorrect when the memory spans across
> > > a huge page.
> > > Correct to directly read the size from the mempool memory chunks.
> > >
> > > Signed-off-by: Frank Du <frank.du@intel.com>
> > > ---
> > >  drivers/net/af_xdp/rte_eth_af_xdp.c | 10 +++++-----
> > >  1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > > index 268a130c49..cb95d17d13 100644
> > > --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> > > +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > > @@ -1039,7 +1039,7 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
> > >  }
> > >
> > >  #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
> > > -static inline uintptr_t get_base_addr(struct rte_mempool *mp, uint64_t *align)
> > > +static inline uintptr_t get_memhdr_info(struct rte_mempool *mp, uint64_t *align, size_t *len)
> > >  {
> > >  	struct rte_mempool_memhdr *memhdr;
> > >  	uintptr_t memhdr_addr, aligned_addr;
> > > @@ -1048,6 +1048,7 @@ static inline uintptr_t get_base_addr(struct rte_mempool *mp, uint64_t *align)
> > >  	memhdr_addr = (uintptr_t)memhdr->addr;
> > >  	aligned_addr = memhdr_addr & ~(getpagesize() - 1);
> > >  	*align = memhdr_addr - aligned_addr;
> > > +	*len = memhdr->len;
> > >
> > >  	return aligned_addr;
> > >  }
> > > @@ -1125,6 +1126,7 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
> > >  	void *base_addr = NULL;
> > >  	struct rte_mempool *mb_pool = rxq->mb_pool;
> > >  	uint64_t umem_size, align = 0;
> > > +	size_t len = 0;
> > >
> > >  	if (internals->shared_umem) {
> > >  		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
> > > @@ -1156,10 +1158,8 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
> > >  		}
> > >
> > >  		umem->mb_pool = mb_pool;
> > > -		base_addr = (void *)get_base_addr(mb_pool, &align);
> > > -		umem_size = (uint64_t)mb_pool->populated_size *
> > > -				(uint64_t)usr_config.frame_size +
> > > -				align;
> > > +		base_addr = (void *)get_memhdr_info(mb_pool, &align, &len);
> > > +		umem_size = (uint64_t)len + align;
> >
> > len is set to the length of the first memhdr of the mempool. There may
> > be many other memhdrs in the mempool. So I don't think this is the
> > correct value to use for calculating the entire umem size.
> 
> Currently each XDP rx ring is bound to one single umem region, so it
> can't reuse the memory if there are multiple memhdrs in the mempool.
> How about adding a check on the number of memory chunks, so that only
> a mempool with a single memhdr can be used here?

The UMEM needs to be a region of virtually contiguous memory. I think this can still be the case even if the mempool has multiple memhdrs.
If we detect more than one memhdr, perhaps we need to verify that the RTE_MEMPOOL_F_NO_IOVA_CONTIG flag is not set, which I think would mean that the mempool may not be virtually contiguous.
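
Something along these lines might work (a rough, untested sketch; umem_get_contig_len() is a made-up name, and it assumes the memhdr list fields from rte_mempool.h):

/* Untested sketch: walk the mempool's memory chunks, verify they form
 * one virtually contiguous region, and return the total length to use
 * as the umem size.
 */
static inline int
umem_get_contig_len(struct rte_mempool *mp, size_t *len)
{
	struct rte_mempool_memhdr *memhdr;
	uintptr_t expected_addr = 0;
	size_t total = 0;

	/* With this flag set the mempool may not be virtually
	 * contiguous, so refuse it when there is more than one chunk.
	 */
	if (mp->nb_mem_chunks > 1 &&
			(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG))
		return -1;

	STAILQ_FOREACH(memhdr, &mp->mem_list, next) {
		/* Each chunk must start where the previous one ended. */
		if (expected_addr != 0 &&
				(uintptr_t)memhdr->addr != expected_addr)
			return -1;
		expected_addr = (uintptr_t)memhdr->addr + memhdr->len;
		total += memhdr->len;
	}

	*len = total;
	return 0;
}

The summed length could then replace the single memhdr->len in the umem size calculation above.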

> 
> >
> > >
> > >  		ret = xsk_umem__create(&umem->umem, base_addr, umem_size,
> > >  				&rxq->fq, &rxq->cq, &usr_config);
> > > --
> > > 2.34.1


Thread overview: 24+ messages
2024-04-26  0:51 [PATCH] net/af_xdp: fix umem map size for zero copy Frank Du
2024-04-26 10:43 ` Loftus, Ciara
2024-04-28  0:46   ` Du, Frank
2024-04-30  9:22     ` Loftus, Ciara [this message]
2024-05-11  5:26 ` [PATCH v2] " Frank Du
2024-05-17 13:19   ` Loftus, Ciara
2024-05-20  1:28     ` Du, Frank
2024-05-21 15:43   ` Ferruh Yigit
2024-05-21 17:57   ` Ferruh Yigit
2024-05-22  1:25     ` Du, Frank
2024-05-22  7:26       ` Morten Brørup
2024-05-22 10:20         ` Ferruh Yigit
2024-05-23  6:56         ` Du, Frank
2024-05-23  7:40           ` Morten Brørup
2024-05-23  7:56             ` Du, Frank
2024-05-22 10:00       ` Ferruh Yigit
2024-05-22 11:03         ` Morten Brørup
2024-05-22 14:05           ` Ferruh Yigit
2024-05-23  6:53 ` [PATCH v3] " Frank Du
2024-05-23  8:07 ` [PATCH v4] " Frank Du
2024-05-23  9:22   ` Morten Brørup
2024-05-23 13:31     ` Ferruh Yigit
2024-05-24  1:05       ` Du, Frank
2024-05-24  5:30         ` Morten Brørup
