From: Jason Xing <kerneljasonxing@gmail.com>
To: Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: "Jesse Brandeburg" <jesse.brandeburg@intel.com>,
	"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
	"David Miller" <davem@davemloft.net>,
	kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, andrii@kernel.org,
	kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
	kpsingh@kernel.org, brouer@redhat.com,
	intel-wired-lan@lists.osuosl.org, netdev <netdev@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	bpf@vger.kernel.org, "Jason Xing" <xingwanli@kuaishou.com>,
	"Shujin Li" <lishujin@kuaishou.com>,
	"Íñigo Huguet" <ihuguet@redhat.com>
Subject: Re: [PATCH] ixgbe: let the xdpdrv work with more than 64 cpus
Date: Tue, 24 Aug 2021 23:23:29 +0800	[thread overview]
Message-ID: <CAL+tcoDERDZqtjK1BCc0vYYwYtvgRtb8H6z2FTVbGqr+N7bVmA@mail.gmail.com> (raw)
In-Reply-To: <59dff551-2d52-5ecc-14ac-4a6ada5b1275@redhat.com>

On Tue, Aug 24, 2021 at 9:32 PM Jesper Dangaard Brouer
<jbrouer@redhat.com> wrote:
>
>
>
> On 24/08/2021 12.49, kerneljasonxing@gmail.com wrote:
> > From: Jason Xing <xingwanli@kuaishou.com>
> >
> > Originally, the ixgbe driver doesn't allow attaching xdpdrv if the
> > server has more than 64 CPUs online, so loading xdpdrv fails with
> > -ENOMEM.
> >
> > Actually, we can adjust the algorithm to make it work, which does no
> > harm at all, as long as we cap the number of XDP queues at the maximum.
>
> This is not true; it can cause harm, because the XDP transmission queues
> are used without locking. See the driver's ndo_xdp_xmit function,
> ixgbe_xdp_xmit(): the driver assumes that each CPU has its own XDP
> TX-queue.
>

Point taken. I indeed missed that part, which would cause bad behavior
if it happens.
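
If I understand correctly, ixgbe_xdp_xmit() picks the TX ring purely
from the current CPU id, without taking any lock; from memory it is
roughly like this (details may be slightly off):

    struct ixgbe_adapter *adapter = netdev_priv(dev);
    struct ixgbe_ring *ring;

    /* Lockless: relies on each CPU owning a dedicated XDP TX ring. */
    ring = adapter->xdp_ring[smp_processor_id()];

So with 128 CPUs and only 64 rings, two CPUs could end up queueing on
the same ring concurrently.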

At this point, I think I should find all the allocations and uses of the
XDP-related queues and rings, then adjust them all?

Let's say the server ships with 128 CPUs: we could map the 128 CPUs onto
64 rings in ixgbe_xdp_xmit(), as in the sketch below. However, it sounds
a little bit odd.
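
A rough sketch of what I mean, assuming a hypothetical per-ring lock
(the tx_lock field name is made up):

    /* Hypothetical: fold CPU ids onto the available rings, and
     * serialize access now that several CPUs may share one ring.
     */
    index = smp_processor_id() % adapter->num_xdp_queues;
    ring = adapter->xdp_ring[index];

    spin_lock(&ring->tx_lock);
    /* ... queue the frames as before ... */
    spin_unlock(&ring->tx_lock);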

Do you think this makes any sense?

Thanks,
Jason

> This patch is not a proper fix.
>
> I do think we need a proper fix for this issue on ixgbe.
>
>
> > Fixes: 33fdc82f08 ("ixgbe: add support for XDP_TX action")
> > Co-developed-by: Shujin Li <lishujin@kuaishou.com>
> > Signed-off-by: Shujin Li <lishujin@kuaishou.com>
> > Signed-off-by: Jason Xing <xingwanli@kuaishou.com>
> > ---
> >   drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  | 2 +-
> >   drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 3 ---
> >   2 files changed, 1 insertion(+), 4 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > index 0218f6c..5953996 100644
> > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> > @@ -299,7 +299,7 @@ static void ixgbe_cache_ring_register(struct ixgbe_adapter *adapter)
> >
> >   static int ixgbe_xdp_queues(struct ixgbe_adapter *adapter)
> >   {
> > -     return adapter->xdp_prog ? nr_cpu_ids : 0;
> > +     return adapter->xdp_prog ? min_t(int, MAX_XDP_QUEUES, nr_cpu_ids) : 0;
> >   }
> >
> >   #define IXGBE_RSS_64Q_MASK  0x3F
> > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> > index 14aea40..b36d16b 100644
> > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> > @@ -10130,9 +10130,6 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
> >                       return -EINVAL;
> >       }
> >
> > -     if (nr_cpu_ids > MAX_XDP_QUEUES)
> > -             return -ENOMEM;
> > -
> >       old_prog = xchg(&adapter->xdp_prog, prog);
> >       need_reset = (!!prog != !!old_prog);
> >
> >
>
