bpf.vger.kernel.org archive mirror
From: Jason Xing <kerneljasonxing@gmail.com>
To: "Nguyen, Anthony L" <anthony.l.nguyen@intel.com>
Cc: "davem@davemloft.net" <davem@davemloft.net>,
	"andrii@kernel.org" <andrii@kernel.org>,
	"john.fastabend@gmail.com" <john.fastabend@gmail.com>,
	"daniel@iogearbox.net" <daniel@iogearbox.net>,
	"kafai@fb.com" <kafai@fb.com>,
	"hawk@kernel.org" <hawk@kernel.org>,
	"Brandeburg, Jesse" <jesse.brandeburg@intel.com>,
	"ast@kernel.org" <ast@kernel.org>,
	"kuba@kernel.org" <kuba@kernel.org>, "yhs@fb.com" <yhs@fb.com>,
	"songliubraving@fb.com" <songliubraving@fb.com>,
	"kpsingh@kernel.org" <kpsingh@kernel.org>, lkp <lkp@intel.com>,
	"xingwanli@kuaishou.com" <xingwanli@kuaishou.com>,
	"lishujin@kuaishou.com" <lishujin@kuaishou.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"intel-wired-lan@lists.osuosl.org"
	<intel-wired-lan@lists.osuosl.org>,
	"bpf@vger.kernel.org" <bpf@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v7] ixgbe: let the xdpdrv work with more than 64 cpus
Date: Wed, 29 Sep 2021 10:20:02 +0800	[thread overview]
Message-ID: <CAL+tcoALdQQPy+9G_azrGqSugGcNjFfYqmf72aNRPahgggeeVA@mail.gmail.com> (raw)
In-Reply-To: <a1ea0abaadc59bdbc6504a64bae594b059c26cdf.camel@intel.com>

On Wed, Sep 29, 2021 at 6:17 AM Nguyen, Anthony L
<anthony.l.nguyen@intel.com> wrote:
>
> On Thu, 2021-09-16 at 14:41 +0800, Jason Xing wrote:
> > Hello guys,
> >
> > any suggestions or comments on this v7 patch?
> >
> > Thanks,
> > Jason
> >
> > On Wed, Sep 1, 2021 at 6:12 PM <kerneljasonxing@gmail.com> wrote:
> > > From: Jason Xing <xingwanli@kuaishou.com>
> > >
> > > Originally, the ixgbe driver doesn't allow attaching xdpdrv (native
> > > XDP) if the server has more than 64 CPUs online, so loading the
> > > program fails with -ENOMEM.
> > >
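For context, the pre-patch rejection looks roughly like this (a sketch
of the guard in ixgbe_xdp_setup(), not a verbatim quote of the driver):

    /* Before the patch (sketch): native XDP attach is refused outright
     * when there are more possible CPUs than XDP queues, because the
     * driver assumed one dedicated XDP ring per CPU. */
    if (nr_cpu_ids > MAX_XDP_QUEUES)
            return -ENOMEM;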
> > > We can adjust the algorithm to make it work: map the current CPU to
> > > an XDP ring, protected by @tx_lock.
> > >
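As a rough illustration of that mapping (a sketch; the helper name
xdp_ring_for_cpu and the key name ixgbe_xdp_locking_key are chosen for
clarity here, not necessarily the literal patch code):

    /* Pick an XDP ring for the current CPU.  When CPUs outnumber the
     * XDP rings, several CPUs share one ring, so callers must also
     * take the ring's @tx_lock around transmit. */
    static struct ixgbe_ring *xdp_ring_for_cpu(struct ixgbe_adapter *adapter)
    {
            int index = smp_processor_id();

            if (static_branch_unlikely(&ixgbe_xdp_locking_key))
                    index %= adapter->num_xdp_queues;

            return adapter->xdp_ring[index];
    }

    /* Hot path (sketch): the per-ring lock is taken only when shared. */
    ring = xdp_ring_for_cpu(adapter);
    if (static_branch_unlikely(&ixgbe_xdp_locking_key))
            spin_lock(&ring->tx_lock);
    /* ... build descriptors and transmit on @ring ... */
    if (static_branch_unlikely(&ixgbe_xdp_locking_key))
            spin_unlock(&ring->tx_lock);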
> > > Here are some numbers before/after applying this patch, with
> > > xdp-example loaded on eth0X:
> > >
> > > As client (tx path):
> > >                      Before    After
> > > TCP_STREAM send-64   734.14    714.20
> > > TCP_STREAM send-128  1401.91   1395.05
> > > TCP_STREAM send-512  5311.67   5292.84
> > > TCP_STREAM send-1k   9277.40   9356.22 (not stable)
> > > TCP_RR     send-1    22559.75  21844.22
> > > TCP_RR     send-128  23169.54  22725.13
> > > TCP_RR     send-512  21670.91  21412.56
> > >
> > > As server (rx path):
> > >                      Before    After
> > > TCP_STREAM send-64   1416.49   1383.12
> > > TCP_STREAM send-128  3141.49   3055.50
> > > TCP_STREAM send-512  9488.73   9487.44
> > > TCP_STREAM send-1k   9491.17   9356.22 (not stable)
> > > TCP_RR     send-1    23617.74  23601.60
> > > ...
> > >
> > > Notice: the TCP_RR mode is unstable, as the official netperf
> > > documentation explains.
> > >
> > > I tested many times with different parameter combinations in
> > > netperf. Though the results are not that precise, I cannot see much
> > > impact from this patch. The static key sits on the hot path, but
> > > theoretically it shouldn't cause a serious regression.
> > >
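For reference, a sketch of the setup side under the same illustrative
names: the key is only enabled when rings must be shared, so on machines
with enough rings the hot-path check costs one patched jump rather than
a lock.

    /* At file scope (sketch): key defaults to false/off. */
    DEFINE_STATIC_KEY_FALSE(ixgbe_xdp_locking_key);

    /* At XDP setup time (sketch, not the literal patch): enable the
     * locking key only when CPUs outnumber the available XDP rings. */
    if (nr_cpu_ids > adapter->num_xdp_queues)
            static_branch_inc(&ixgbe_xdp_locking_key);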
> > > Fixes: 33fdc82f08 ("ixgbe: add support for XDP_TX action")
>
> Hi Jason,
>
> The patch doesn't have an explicit target of net or net-next. I assume
> since you put a Fixes tag you're wanting it to go through net, however,
> this seems more like an improvement that should go through net-next.

Yes, it is more of an improvement. At first I wanted to target net,
but as you said it isn't really a fix. So I agree with you; please
send it to net-next.

thanks,
Jason

> Please let me know if you disagree, otherwise I will send to net-next.
>
> Thanks,
> Tony
>
> > > Reported-by: kernel test robot <lkp@intel.com>
> > > Co-developed-by: Shujin Li <lishujin@kuaishou.com>
> > > Signed-off-by: Shujin Li <lishujin@kuaishou.com>
> > > Signed-off-by: Jason Xing <xingwanli@kuaishou.com>
> >


Thread overview: 6+ messages
2021-09-01 10:12 [PATCH v7] ixgbe: let the xdpdrv work with more than 64 cpus kerneljasonxing
2021-09-03 16:07 ` Jason Xing
2021-09-16  6:41 ` Jason Xing
2021-09-28 22:17   ` Nguyen, Anthony L
2021-09-29  2:20     ` Jason Xing [this message]
2021-09-28  4:03 ` [Intel-wired-lan] " Penigalapati, Sandeep

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CAL+tcoALdQQPy+9G_azrGqSugGcNjFfYqmf72aNRPahgggeeVA@mail.gmail.com \
    --to=kerneljasonxing@gmail.com \
    --cc=andrii@kernel.org \
    --cc=anthony.l.nguyen@intel.com \
    --cc=ast@kernel.org \
    --cc=bpf@vger.kernel.org \
    --cc=daniel@iogearbox.net \
    --cc=davem@davemloft.net \
    --cc=hawk@kernel.org \
    --cc=intel-wired-lan@lists.osuosl.org \
    --cc=jesse.brandeburg@intel.com \
    --cc=john.fastabend@gmail.com \
    --cc=kafai@fb.com \
    --cc=kpsingh@kernel.org \
    --cc=kuba@kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=lishujin@kuaishou.com \
    --cc=lkp@intel.com \
    --cc=netdev@vger.kernel.org \
    --cc=songliubraving@fb.com \
    --cc=xingwanli@kuaishou.com \
    --cc=yhs@fb.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox; see the mirroring instructions for how to clone
and mirror all data and code used for this inbox, as well as URLs for
NNTP newsgroup(s).