From: Saeed Mahameed <saeedm@dev.mellanox.co.il>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: netdev@vger.kernel.org, Christoph Lameter <cl@linux.com>,
	tom@herbertland.com, Alexander Duyck <alexander.duyck@gmail.com>,
	alexei.starovoitov@gmail.com, Or Gerlitz <ogerlitz@mellanox.com>,
	Or Gerlitz <gerlitz.or@gmail.com>,
	Eran Ben Elisha <eranbe@mellanox.com>,
	Rana Shahout <ranas@mellanox.com>
Subject: Re: [net-next PATCH 06/11] RFC: mlx5: RX bulking or bundling of packets before calling network stack
Date: Tue, 9 Feb 2016 13:57:41 +0200
Message-ID: <CALzJLG_w1Nr9rtZ_G4n894gvN=XTbfikGfPgNc4+VBAYw2Pyug@mail.gmail.com>
In-Reply-To: <20160202211228.16315.9691.stgit@firesoul>

On Tue, Feb 2, 2016 at 11:13 PM, Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
> There are several techniques/concepts combined in this optimization.
> It is both a data-cache and instruction-cache optimization.
>
> First of all, this is primarily about delaying touching
> packet-data, which happens in eth_type_trans(), until the
> prefetch has had time to complete.  Thus, hopefully avoiding a
> cache-miss on packet data.
>
> Secondly, the instruction-cache optimization comes from not
> calling the network stack for every packet pulled out of the
> RX ring.  Calling into the full stack likely flushes the
> instruction cache every time.
>
> Thus, use two loops: one loop pulling packets out of the RX
> ring and starting the prefetch, and a second loop calling
> eth_type_trans() and invoking the stack via napi_gro_receive().
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
>
>
> Notes:
> This is the patch that gave a speedup from 6.2 Mpps to 12 Mpps
> when measuring the lowest RX level, by dropping the packets in
> the driver itself (drop point marked with a comment).
Indeed this looks very promising with respect to the
instruction-cache optimization, but I have some doubts regarding
the data-cache optimization (prefetch); please see my questions
below.

We will take this patch and test it in house.

>
> For now, the ring is emptied up to the budget.  I don't know if
> it would be better to chunk it up more?
Not sure; according to netdevice.h:

/* Default NAPI poll() weight
 * Device drivers are strongly advised to not use bigger value
 */
#define NAPI_POLL_WEIGHT 64

We will also compare different budget values with your approach,
but I doubt that increasing NAPI_POLL_WEIGHT for the mlx5 driver
would be accepted.
Furthermore, increasing the NAPI poll budget might overflow the
cache with this approach, since you are batching up all the
"prefetch(skb->data)" calls (I haven't yet done the math on cache
utilization with this approach).
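
Rough math, assuming 64 B cache lines: each prefetch(skb->data)
pulls in one line, so a full budget of 64 packets touches
64 * 64 B = 4 KB before the second loop starts consuming the
data.  That is 64 of the 512 lines in a typical 32 KB L1D, which
fits on its own, but together with the skb structs and whatever
the stack touches per packet, a larger budget could start
evicting prefetched lines before they are used.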

>         mlx5e_handle_csum(netdev, cqe, rq, skb);
>
> -       skb->protocol = eth_type_trans(skb, netdev);
> -
mlx5e_handle_csum() also accesses skb->data, in the
is_first_ethertype_ip() function, but I think that is less
interesting since it is not the common case.
E.g., in the uncommon case of L4 traffic with no HW checksum
offload, you won't benefit from this optimization, since we read
skb->data to learn the L3 header type.  This could be fixed in
the driver by checking the CQE metadata for these fields instead
of accessing skb->data, but I will need to look further into
that.
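
Something along these lines, assuming the CQE L4 header type is
enough to stand in for the ethertype check (untested sketch;
treat the helper as approximate):

	/* Untested: classify from CQE metadata instead of reading
	 * the packet, so the csum path no longer touches skb->data
	 * before the prefetch has completed.  A non-NONE L4 header
	 * type should imply the packet was parsed as IP.
	 */
	static inline bool mlx5e_cqe_is_ip(struct mlx5_cqe64 *cqe)
	{
		return get_cqe_l4_hdr_type(cqe) != CQE_L4_HDR_TYPE_NONE;
	}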

> @@ -252,7 +257,6 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
>                 wqe_counter    = be16_to_cpu(wqe_counter_be);
>                 wqe            = mlx5_wq_ll_get_wqe(&rq->wq, wqe_counter);
>                 skb            = rq->skb[wqe_counter];
> -               prefetch(skb->data);
>                 rq->skb[wqe_counter] = NULL;
>
>                 dma_unmap_single(rq->pdev,
> @@ -265,16 +269,27 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
>                         dev_kfree_skb(skb);
>                         goto wq_ll_pop;
>                 }
> +               prefetch(skb->data);
Is this optimal for all CPU archs?  Is it OK to use up to 64
cache lines at once?
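
If it is not, one option is to bound the number of outstanding
prefetches by processing the budget in smaller bundles, e.g.
(sketch only; bundle size picked arbitrarily and needs
benchmarking):

	#define MLX5E_RX_BUNDLE 8	/* arbitrary, to be tuned */

	int left = budget;

	while (left) {
		int n = min(left, MLX5E_RX_BUNDLE);

		/* loop 1 over n CQEs: unmap + prefetch(skb->data) */
		/* loop 2 over the same n skbs: eth_type_trans() and
		 * napi_gro_receive()
		 */
		left -= n;
	}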
