From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: "Tariq Toukan" <tariqt@mellanox.com>,
	"Jesper Dangaard Brouer" <brouer@redhat.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Toke Høiland-Jørgensen" <toke@toke.dk>,
	"toshiaki.makita1@gmail.com" <toshiaki.makita1@gmail.com>,
	"grygorii.strashko@ti.com" <grygorii.strashko@ti.com>,
	"mcroce@redhat.com" <mcroce@redhat.com>
Subject: Re: [PATCH net-next v1 08/11] xdp: tracking page_pool resources and safe removal
Date: Tue, 18 Jun 2019 16:48:48 +0300
Message-ID: <20190618134847.GB5307@khorivan>
In-Reply-To: <20190618133012.GA2055@apalos>

On Tue, Jun 18, 2019 at 04:30:12PM +0300, Ilias Apalodimas wrote:
>Hi Ivan, Tariq,
>
>> >>>+
>[...]
>> >>
>> >>What would you recommend doing in the following situation:
>> >>
>> >>The same receive queue is shared between 2 network devices. The receive
>> >>ring is filled with pages from a page_pool, but you don't know which
>> >>actual port (ndev) the ring is being filled for, because the device is
>> >>recognized only after a packet is received.
>> >>
>> >>The API is such that an xdp rxq is bound to a network device and each
>> >>frame holds a reference to it, so the rxq ndev must be static. That
>> >>means each netdev has its own rxq instance even when there is no need
>> >>for it. Thus, after your changes, a page must be returned to the pool
>> >>it was taken from, or released from the old pool and recycled into the
>> >>new one somehow.
>> >>
>> >>And that is an inconvenience at the very least. It's hard to move pages
>> >>between pools w/o a performance penalty. There is no way to use a common
>> >>pool either, as unreg_rxq now drops the pool and 2 rxqs can't reference
>> >>the same pool.
>> >>
>> >
>> >Within a single netdev, separate page_pool instances are created anyway
>> >for different RX rings, each working under a different NAPI.
>>
>> The circumstances are such that the same RX ring is shared between 2
>> netdevs... and the netdev can be known only after a descriptor/packet
>> is received. Thus, while filling the RX ring, there is no actual
>> device, but when a packet is received it has to be recycled to the
>> appropriate net device's pool. Before this change it made no difference
>> which pool the page was allocated from to fill the RX ring, as there
>> was no owner. After this change there is an owner - the netdev's page
>> pool.
>>
>> For cpsw the DMA unmap is common to both netdevs and it makes no
>> difference who frees the page, but it does matter which pool it is
>> freed to.
>Since 2 netdevs are sharing one queue, you'll need locking, right?
>(Assuming that the rx-irq of each device can end up on a different core)
No, the rx-irq is not per device. The queue is shared only for the
descriptors; beyond that, each ndev effectively has its own queue with
its own pool. And even if the rx-irq were per device there would be no
issue: as I said, every queue and every ndev has its own page pool, so
no locking is needed for the pools...
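
Just to illustrate what I mean, a rough sketch (the "sketch_" name is
hypothetical, error handling simplified, not the actual cpsw code) of
the "one page_pool per rxq per ndev" layout with the current
page_pool/xdp_rxq_info API:

#include <linux/err.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Each ndev gets its own xdp_rxq_info and its own page_pool, even when
 * the underlying hw descriptor ring is shared, because an rxq is bound
 * to exactly one net_device.
 */
static int sketch_setup_rxq(struct net_device *ndev, u32 queue_idx,
			    struct xdp_rxq_info *rxq,
			    struct page_pool **pool)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP,
		.pool_size	= 256,	/* example: descriptor ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= ndev->dev.parent,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	int err;

	*pool = page_pool_create(&pp_params);
	if (IS_ERR(*pool))
		return PTR_ERR(*pool);

	err = xdp_rxq_info_reg(rxq, ndev, queue_idx);
	if (err)
		return err;

	/* ties recycling for this rxq to this ndev's pool */
	return xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL, *pool);
}

With a pool and an rxq per ndev, and the shared queue processed in one
NAPI context as above, nothing here needs a lock.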

>We discussed that ideally page pools should be allocated per hardware queue.
This is about one hw queue shared between ndevs. The page pool is separate
for each hw queue and each ndev, of course.
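
The inconvenience I described above sits exactly here. A rough sketch
(hypothetical names, not cpsw code) of refilling the shared descriptor
ring when each ndev has its own pool:

/* The hw descriptor ring is shared by both ports, so at refill time a
 * pool has to be picked arbitrarily; the real owner is only known after
 * the packet (and thus the port) is received, and by then the page may
 * have to go back to the other ndev's pool.
 */
static struct page *sketch_refill_shared_ring(struct page_pool *port_pool[2])
{
	return page_pool_dev_alloc_pages(port_pool[0]);	/* arbitrary choice */
}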

>If you indeed need locking (and pay the performance penalty anyway) I wonder if
>there's anything preventing you from keeping the same principle, i.e. allocate a
>pool per queue
The page pool is already per queue.

>pool per queue and handle the recycling to the proper ndev internally.
>That way only the first device will be responsible for
>allocating/recycling/maintaining the pool state.
No. There are more dependencies than it looks like, see rxq_info ...
The final recycling does not happen internally to the driver.
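
To make that dependency concrete, a rough sketch of where the final
recycling actually happens (not the real kernel code; the lookup helper
is hypothetical): when a frame is freed, the core, not the driver, maps
it back to the allocator registered for the rxq it came from:

static void sketch_return_frame(struct xdp_frame *frame)
{
	struct page *page = virt_to_head_page(frame->data);

	/* frame->mem.id identifies the page_pool registered via
	 * xdp_rxq_info_reg_mem_model(); the real code resolves it through
	 * an rhashtable of xdp_mem_allocator entries.
	 */
	struct page_pool *pool = lookup_pool_by_mem_id(frame->mem.id); /* hypothetical */

	page_pool_put_page(pool, page, false);
}

So even if one driver "owned" a common pool, a page would still be routed
by mem.id back to whichever pool its rxq was registered with.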

-- 
Regards,
Ivan Khoronzhuk


Thread overview: 24+ messages
2019-06-13 18:28 [PATCH net-next v1 00/11] xdp: page_pool fixes and in-flight accounting Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 01/11] net: page_pool: add helper function to retrieve dma addresses Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 02/11] net: page_pool: add helper function to unmap " Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 03/11] xdp: fix leak of IDA cyclic id if rhashtable_insert_slow fails Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 04/11] xdp: page_pool related fix to cpumap Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 05/11] veth: use xdp_release_frame for XDP_PASS Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 06/11] page_pool: introduce page_pool_free and use in mlx5 Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 07/11] mlx5: more strict use of page_pool API Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 08/11] xdp: tracking page_pool resources and safe removal Jesper Dangaard Brouer
2019-06-14 11:01   ` Ilias Apalodimas
2019-06-15  9:33   ` Ivan Khoronzhuk
2019-06-16 10:56     ` Tariq Toukan
2019-06-18 12:54       ` Ivan Khoronzhuk
2019-06-18 13:30         ` Ilias Apalodimas
2019-06-18 13:48           ` Ivan Khoronzhuk [this message]
2019-06-18 15:19         ` Jesper Dangaard Brouer
2019-06-18 17:54           ` Ivan Khoronzhuk
2019-06-19 11:12             ` Ivan Khoronzhuk
2019-06-13 18:28 ` [PATCH net-next v1 09/11] xdp: force mem allocator removal and periodic warning Jesper Dangaard Brouer
2019-06-15  8:59   ` Ivan Khoronzhuk
2019-06-18  8:37     ` Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 10/11] xdp: add tracepoints for XDP mem Jesper Dangaard Brouer
2019-06-13 18:28 ` [PATCH net-next v1 11/11] page_pool: add tracepoints for page_pool with details need by XDP Jesper Dangaard Brouer
2019-06-15  2:41 ` [PATCH net-next v1 00/11] xdp: page_pool fixes and in-flight accounting David Miller
