From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Toshiaki Makita <toshiaki.makita1@gmail.com>
Cc: "Alexei Starovoitov" <ast@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"David S. Miller" <davem@davemloft.net>,
	"Jakub Kicinski" <jakub.kicinski@netronome.com>,
	"Jesper Dangaard Brouer" <hawk@kernel.org>,
	"John Fastabend" <john.fastabend@gmail.com>,
	netdev@vger.kernel.org, xdp-newbies@vger.kernel.org,
	bpf@vger.kernel.org, "Toke Høiland-Jørgensen" <toke@redhat.com>,
	"Jason Wang" <jasowang@redhat.com>,
	"Brendan Gregg" <brendan.d.gregg@gmail.com>,
	brouer@redhat.com
Subject: Re: [PATCH v2 bpf-next 1/2] xdp: Add tracepoint for bulk XDP_TX
Date: Thu, 6 Jun 2019 21:41:05 +0200	[thread overview]
Message-ID: <20190606214105.6bf2f873@carbon> (raw)
In-Reply-To: <abd43c39-afb7-acd4-688a-553cec76f55c@gmail.com>

On Thu, 6 Jun 2019 20:04:20 +0900
Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:

> On 2019/06/05 16:59, Jesper Dangaard Brouer wrote:
> > On Wed,  5 Jun 2019 14:36:12 +0900
> > Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
> >   
> >> This is introduced for admins to check what is happening on XDP_TX when
> >> bulk XDP_TX is in use, which will be first introduced in veth in next
> >> commit.  
> > 
> > Is the plan that this tracepoint 'xdp:xdp_bulk_tx' should be used by
> > all drivers?  
> 
> I guess you mean all drivers that implement a similar mechanism should
> use this? Then yes.
> (I don't think all drivers need a bulk tx mechanism, though)
> 
> > (more below)
> >   
> >> Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
> >> ---
> >>   include/trace/events/xdp.h | 25 +++++++++++++++++++++++++
> >>   kernel/bpf/core.c          |  1 +
> >>   2 files changed, 26 insertions(+)
> >>
> >> diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
> >> index e95cb86..e06ea65 100644
> >> --- a/include/trace/events/xdp.h
> >> +++ b/include/trace/events/xdp.h
> >> @@ -50,6 +50,31 @@
> >>   		  __entry->ifindex)
> >>   );
> >>   
> >> +TRACE_EVENT(xdp_bulk_tx,
> >> +
> >> +	TP_PROTO(const struct net_device *dev,
> >> +		 int sent, int drops, int err),
> >> +
> >> +	TP_ARGS(dev, sent, drops, err),
> >> +
> >> +	TP_STRUCT__entry(  
> > 
> > All other tracepoints in this file starts with:
> > 
> > 		__field(int, prog_id)
> > 		__field(u32, act)
> > or
> > 		__field(int, map_id)
> > 		__field(u32, act)
> > 
> > Could you please add those?  
> 
> So... prog_id is the problem. The program can be changed while we are 
> enqueueing packets to the bulk queue, so the prog_id at flush may be an 
> unexpected one.

Hmmm... that sounds problematic, if the XDP bpf_prog for veth can
change underneath, before the flush.  Our redirect system depends on
things being stable until the xdp_do_flush_map() operation, as it will
e.g. set the per-CPU (bpf_redirect_info) map_to_flush pointer (which
depends on the XDP prog), and expects it to be correct/valid.


> It can be fixed by disabling NAPI when changing XDP programs. This stops 
> packet processing while changing XDP programs, but I guess it is an 
> acceptable compromise. Having said that, I'm honestly not so eager to 
> make this change, since it will require refurbishing one of the most 
> delicate parts of veth XDP, the NAPI disabling/enabling mechanism.
> 
> WDYT?

Sounds like a bug, if the XDP bpf_prog is not stable within the NAPI poll...

 
> >> +		__field(int, ifindex)
> >> +		__field(int, drops)
> >> +		__field(int, sent)
> >> +		__field(int, err)
> >> +	),  
> > 
> > The reason is that this makes it easier to attach to multiple
> > tracepoints and extract the same value.
> > 
> > Example with bpftrace oneliner:
> > 
> > $ sudo bpftrace -e 'tracepoint:xdp:xdp_* { @action[args->act] = count(); }'
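For reference, here is a sketch of how the tracepoint could carry the
common prog_id/act fields. Note this is hypothetical: it assumes the
caller passes the bpf_prog pointer down to the tracepoint (an extra
'xdp' argument that the posted patch does not have), and it hard-codes
act to XDP_TX since that is the only action this tracepoint covers:

```c
TRACE_EVENT(xdp_bulk_tx,

	TP_PROTO(const struct net_device *dev,
		 const struct bpf_prog *xdp,	/* assumed extra argument */
		 int sent, int drops, int err),

	TP_ARGS(dev, xdp, sent, drops, err),

	TP_STRUCT__entry(
		__field(int, prog_id)
		__field(u32, act)
		__field(int, ifindex)
		__field(int, drops)
		__field(int, sent)
		__field(int, err)
	),

	TP_fast_assign(
		__entry->prog_id	= xdp->aux->id;
		__entry->act		= XDP_TX;	/* bulk TX is always XDP_TX */
		__entry->ifindex	= dev->ifindex;
		__entry->drops		= drops;
		__entry->sent		= sent;
		__entry->err		= err;
	),

	TP_printk("prog_id=%d action=%s ifindex=%d sent=%d drops=%d err=%d",
		  __entry->prog_id,
		  __print_symbolic(__entry->act, __XDP_ACT_SYM_TAB),
		  __entry->ifindex, __entry->sent, __entry->drops,
		  __entry->err)
);
```

With that in place, the bpftrace one-liner above would also pick up
xdp_bulk_tx events via the tracepoint:xdp:xdp_* wildcard, and
per-interface totals could be aggregated with e.g.
@sent[args->ifindex] = sum(args->sent).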

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
