linux-kernel.vger.kernel.org archive mirror
* re: bpf: Run devmap xdp_prog on flush instead of bulk enqueue
@ 2021-05-27 10:43 Colin Ian King
  2021-05-27 12:58 ` Maciej Fijalkowski
  0 siblings, 1 reply; 2+ messages in thread
From: Colin Ian King @ 2021-05-27 10:43 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Hangbin Liu, Daniel Borkmann, Toke Høiland-Jørgensen,
	John Fastabend, bpf, linux-kernel

Hi,

Static analysis with Coverity on linux-next detected a minor issue that
was introduced with the following commit:

commit cb261b594b4108668e00f565184c7c221efe0359
Author: Jesper Dangaard Brouer <brouer@redhat.com>
Date:   Wed May 19 17:07:44 2021 +0800

    bpf: Run devmap xdp_prog on flush instead of bulk enqueue

The analysis is as follows:

370static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
371{
372        struct net_device *dev = bq->dev;
373        int sent = 0, drops = 0, err = 0;
374        unsigned int cnt = bq->count;
375        int to_send = cnt;
376        int i;
377
378        if (unlikely(!cnt))
379                return;
380
381        for (i = 0; i < cnt; i++) {
382                struct xdp_frame *xdpf = bq->q[i];
383
384                prefetch(xdpf);
385        }
386
387        if (bq->xdp_prog) {
388        to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev);
389                if (!to_send)
390                        goto out;
391
   Unused value (UNUSED_VALUE)
   assigned_value: Assigning value from cnt - to_send to drops here, but
that stored value is overwritten before it can be used.

392                drops = cnt - to_send;
393        }
394
395        sent = dev->netdev_ops->ndo_xdp_xmit(dev, to_send, bq->q, flags);
396        if (sent < 0) {
397                /* If ndo_xdp_xmit fails with an errno, no frames have
398                 * been xmit'ed.
399                 */
400                err = sent;
401                sent = 0;
402        }
403
404        /* If not all frames have been transmitted, it is our
405         * responsibility to free them
406         */
407        for (i = sent; unlikely(i < to_send); i++)
408                xdp_return_frame_rx_napi(bq->q[i]);
409
410out:

   value_overwrite: Overwriting previous write to drops with value from
cnt - sent.

411        drops = cnt - sent;
412        bq->count = 0;
413        trace_xdp_devmap_xmit(bq->dev_rx, dev, sent, drops, err);
414}

drops is being calculated twice but the first value is not used. Not
sure if that was intentional or an oversight.

Colin


* Re: bpf: Run devmap xdp_prog on flush instead of bulk enqueue
  2021-05-27 10:43 bpf: Run devmap xdp_prog on flush instead of bulk enqueue Colin Ian King
@ 2021-05-27 12:58 ` Maciej Fijalkowski
  0 siblings, 0 replies; 2+ messages in thread
From: Maciej Fijalkowski @ 2021-05-27 12:58 UTC (permalink / raw)
  To: Colin Ian King
  Cc: Jesper Dangaard Brouer, Hangbin Liu, Daniel Borkmann,
	Toke Høiland-Jørgensen, John Fastabend, bpf,
	linux-kernel

On Thu, May 27, 2021 at 11:43:20AM +0100, Colin Ian King wrote:
> Hi,
> 
> Static analysis with Coverity on linux-next detected a minor issue that
> was introduced with the following commit:
> 
> commit cb261b594b4108668e00f565184c7c221efe0359
> Author: Jesper Dangaard Brouer <brouer@redhat.com>
> Date:   Wed May 19 17:07:44 2021 +0800
> 
>     bpf: Run devmap xdp_prog on flush instead of bulk enqueue
> 
> The analysis is as follows:
> 
> 370static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
> 371{
> 372        struct net_device *dev = bq->dev;
> 373        int sent = 0, drops = 0, err = 0;
> 374        unsigned int cnt = bq->count;
> 375        int to_send = cnt;
> 376        int i;
> 377
> 378        if (unlikely(!cnt))
> 379                return;
> 380
> 381        for (i = 0; i < cnt; i++) {
> 382                struct xdp_frame *xdpf = bq->q[i];
> 383
> 384                prefetch(xdpf);
> 385        }
> 386
> 387        if (bq->xdp_prog) {
> 388        to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev);
> 389                if (!to_send)
> 390                        goto out;
> 391
>    Unused value (UNUSED_VALUE)
>    assigned_value: Assigning value from cnt - to_send to drops here, but
> that stored value is overwritten before it can be used.
> 
> 392                drops = cnt - to_send;
> 393        }
> 394
> 395        sent = dev->netdev_ops->ndo_xdp_xmit(dev, to_send, bq->q, flags);
> 396        if (sent < 0) {
> 397                /* If ndo_xdp_xmit fails with an errno, no frames have
> 398                 * been xmit'ed.
> 399                 */
> 400                err = sent;
> 401                sent = 0;
> 402        }
> 403
> 404        /* If not all frames have been transmitted, it is our
> 405         * responsibility to free them
> 406         */
> 407        for (i = sent; unlikely(i < to_send); i++)

FWIW, at the time I suggested a rewrite of bq_xmit_all, we were still
using the 'drops' value computed above via:

		for (i = 0; i < cnt - drops; i++) {

So it looks like the calculation at line 392 is no longer needed.

> 408                xdp_return_frame_rx_napi(bq->q[i]);
> 409
> 410out:
> 
>    value_overwrite: Overwriting previous write to drops with value from
> cnt - sent.
> 
> 411        drops = cnt - sent;
> 412        bq->count = 0;
> 413        trace_xdp_devmap_xmit(bq->dev_rx, dev, sent, drops, err);
> 414}
> 
> drops is being calculated twice but the first value is not used. Not
> sure if that was intentional or an oversight.
> 
> Colin

