From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>,
	Network Development <netdev@vger.kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Eelco Chaudron <echaudro@redhat.com>,
	thomas.petazzoni@bootlin.com
Subject: Re: [PATCH net-next] net: mvneta: try to use in-irq pp cache in mvneta_txq_bufs_free
Date: Fri, 25 Sep 2020 13:29:00 +0200
Message-ID: <CAJ0CqmV8OJoERhYktLNP7gYDwURs97JAmbsXq2jqKHhMoHk-pg@mail.gmail.com>
In-Reply-To: <20200925125213.4981cff8@carbon>

>
> On Fri, 25 Sep 2020 12:01:32 +0200
> Lorenzo Bianconi <lorenzo@kernel.org> wrote:
>
> > Try to recycle the xdp tx buffer into the in-irq page_pool cache if
> > mvneta_txq_bufs_free is executed in the NAPI context.
>
> NACK - I don't think this is safe.  That is also why I gave the
> function the rx_napi postfix.  The page pool->alloc.cache is
> associated with the driver's RX-queue.  The xdp_frames that get freed
> could be coming from a remote driver that uses page_pool, and that
> remote driver's RX-queue processing can run concurrently on a
> different CPU than this driver's TXQ cleanup.

ack, right. What about doing it only for the XDP_TX case? Something like:

	if (napi && buf->type == MVNETA_TYPE_XDP_TX)
		xdp_return_frame_rx_napi(buf->xdpf);
	else
		xdp_return_frame(buf->xdpf);

MVNETA_TYPE_XDP_TX frames are built from pages we allocated from this
port's own page_pool, so in that case we are sure the packet comes from
the local page_pool.
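
For the MVNETA_TYPE_XDP_NDO case the frame may have been redirected to
us from another device, so its page can belong to a remote page_pool
and we keep going through xdp_return_frame(). My (simplified)
understanding of the two return paths, just a rough sketch of why the
distinction matters, not the exact code:

	/* xdp_return_frame_rx_napi(): allow_direct recycling */
	page_pool_put_full_page(pool, page, true);
	/* -> page lands in pool->alloc.cache: lockless, assumes the
	 *    caller is the single NAPI consumer of 'pool'
	 */

	/* xdp_return_frame(): */
	page_pool_put_full_page(pool, page, false);
	/* -> page goes through pool->ring (ptr_ring + producer lock),
	 *    safe from any context
	 */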

>
> If you want to speed this up, I instead suggest that you add an
> xdp_return_frame_bulk API.

I will look at it
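
Maybe something along these lines as a starting point? Just a rough,
untested sketch to make sure I understand the idea - the names and
layout below are placeholders, not an existing API:

	#define XDP_BULK_QUEUE_SIZE	16

	struct xdp_frame_bulk {
		void *xa;	/* mem allocator (page_pool) of the queued pages */
		int count;
		void *q[XDP_BULK_QUEUE_SIZE];	/* pages waiting to be recycled */
	};

	/* queue the frame; flush to 'xa' when the array is full or the
	 * allocator changes
	 */
	void xdp_return_frame_bulk(struct xdp_frame *xdpf,
				   struct xdp_frame_bulk *bq);

	/* return all queued pages to their page_pool in one shot */
	void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq);

and then in mvneta_txq_bufs_free() something like:

	struct xdp_frame_bulk bq;

	bq.xa = NULL;
	bq.count = 0;
	for (i = 0; i < num; i++) {
		...
		xdp_return_frame_bulk(buf->xdpf, &bq);
	}
	xdp_flush_frame_bulk(&bq);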

Regards,
Lorenzo


>
>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
> >  1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index 14df3aec285d..646fbf4ed638 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -1831,7 +1831,7 @@ static struct mvneta_tx_queue *mvneta_tx_done_policy(struct mvneta_port *pp,
> >  /* Free tx queue skbuffs */
> >  static void mvneta_txq_bufs_free(struct mvneta_port *pp,
> >                                struct mvneta_tx_queue *txq, int num,
> > -                              struct netdev_queue *nq)
> > +                              struct netdev_queue *nq, bool napi)
> >  {
> >       unsigned int bytes_compl = 0, pkts_compl = 0;
> >       int i;
> > @@ -1854,7 +1854,10 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
> >                       dev_kfree_skb_any(buf->skb);
> >               } else if (buf->type == MVNETA_TYPE_XDP_TX ||
> >                          buf->type == MVNETA_TYPE_XDP_NDO) {
> > -                     xdp_return_frame(buf->xdpf);
> > +                     if (napi)
> > +                             xdp_return_frame_rx_napi(buf->xdpf);
> > +                     else
> > +                             xdp_return_frame(buf->xdpf);
> >               }
> >       }
> >
> > @@ -1872,7 +1875,7 @@ static void mvneta_txq_done(struct mvneta_port *pp,
> >       if (!tx_done)
> >               return;
> >
> > -     mvneta_txq_bufs_free(pp, txq, tx_done, nq);
> > +     mvneta_txq_bufs_free(pp, txq, tx_done, nq, true);
> >
> >       txq->count -= tx_done;
> >
> > @@ -2859,7 +2862,7 @@ static void mvneta_txq_done_force(struct mvneta_port *pp,
> >       struct netdev_queue *nq = netdev_get_tx_queue(pp->dev, txq->id);
> >       int tx_done = txq->count;
> >
> > -     mvneta_txq_bufs_free(pp, txq, tx_done, nq);
> > +     mvneta_txq_bufs_free(pp, txq, tx_done, nq, false);
> >
> >       /* reset txq */
> >       txq->count = 0;
>
>
>
> --
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
>

