From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Jakub Kicinski <kuba@kernel.org>, "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>, Paolo Abeni <pabeni@redhat.com>,
	Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
	Magnus Karlsson <magnus.karlsson@intel.com>,
	Michal Kubiak <michal.kubiak@intel.com>,
	Larysa Zaremba <larysa.zaremba@intel.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>, Christoph Hellwig <hch@lst.de>,
	netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers
Date: Thu, 18 May 2023 15:53:43 +0200	[thread overview]
Message-ID: <b19f43b8-f973-2627-3ef1-ce53007ece0f@intel.com> (raw)
In-Reply-To: <ZGXNzX77/5cXqAhe@hera>

From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Date: Thu, 18 May 2023 10:03:41 +0300

> Hi all,
>
>> On Wed, May 17, 2023 at 09:12:11PM -0700, Jakub Kicinski wrote:
>> On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
>>> Each driver is responsible for syncing buffers written by HW for CPU
>>> before accessing them. Almost each PP-enabled driver uses the same
>>> pattern, which could be shorthanded into a static inline to make driver
>>> code a little bit more compact.

[...]

>>> +	dma_sync_single_range_for_cpu(pool->p.dev,
>>> +				      page_pool_get_dma_addr(page),
>>> +				      pool->p.offset, dma_sync_size,
>>> +				      page_pool_get_dma_dir(pool));
>>
>> Likely a dumb question but why does this exist?
>> Is there a case where the "maybe" version is not safe?
>>
>
> I got similar concerns here. Syncing for the cpu is currently a
> responsibility for the driver. The reason for having an automated DMA sync
> is that we know when we allocate buffers for the NIC to consume so we can
> safely sync them accordingly. I am fine having a page pool version for the
> cpu sync, but do we really have to check the pp flags for that? IOW if you
> are at the point that you need to sync a buffer for the cpu *someone*
> already mapped it for you. Regardless of who mapped it the sync is
> identical

The flag in the "maybe" version is the continuation of the shortcut from
6/11. If the flag is not set even though you asked PP to do syncs, that
means PP enabled the shortcut to avoid going through the function call
ladders for nothing. The ladder is basically the same for sync-for-CPU as
the one described in 6/11 for sync-for-dev.
I could place that in the driver, but I feel like it's better to have that
one level up to reduce boilerplate.

>
>>> +}
>>> +
>>> +/**
>>> + * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
>>> + * @pool: page_pool which this page belongs to
>>> + * @page: page to sync
>>> + * @dma_sync_size: size of the data written to the page
>>> + *
>>> + * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
>>> + */
>>> +static inline void
>>> +page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
>>> +				 const struct page *page, u32 dma_sync_size)
>>> +{
>>> +	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>>> +		page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
>>> +}

>
> Thanks
> /Ilias

Thanks,
Olek