From: Mina Almasry <almasrymina@google.com>
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Cc: "Mina Almasry" <almasrymina@google.com>, "David S. Miller" <davem@davemloft.net>,
	"Eric Dumazet" <edumazet@google.com>, "Jakub Kicinski" <kuba@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>, "Jesper Dangaard Brouer" <hawk@kernel.org>,
	"Ilias Apalodimas" <ilias.apalodimas@linaro.org>, "Arnd Bergmann" <arnd@arndb.de>,
	"David Ahern" <dsahern@kernel.org>, "Willem de Bruijn" <willemdebruijn.kernel@gmail.com>,
	"Shuah Khan" <shuah@kernel.org>, "Sumit Semwal" <sumit.semwal@linaro.org>,
	"Christian König" <christian.koenig@amd.com>, "Shakeel Butt" <shakeelb@google.com>,
	"Jeroen de Borst" <jeroendb@google.com>, "Praveen Kaligineedi" <pkaligineedi@google.com>,
	"Willem de Bruijn" <willemb@google.com>, "Kaiyuan Zhang" <kaiyuanz@google.com>
Subject: [RFC PATCH v3 06/12] memory-provider: dmabuf devmem memory provider
Date: Sun, 5 Nov 2023 18:44:05 -0800
Message-ID: <20231106024413.2801438-7-almasrymina@google.com>
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>

Implement a memory provider that allocates dmabuf devmem page_pool_iovs.

Support for PP_FLAG_DMA_MAP and PP_FLAG_DMA_SYNC_DEV is omitted for
simplicity.

The provider receives a reference to the struct netdev_dmabuf_binding
via the pool->mp_priv pointer. The driver needs to set this pointer for
the provider in the page_pool_params.

The provider obtains a reference on the netdev_dmabuf_binding, which
guarantees the binding and the underlying mapping remain alive until
the provider is destroyed.
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Kaiyuan Zhang <kaiyuanz@google.com>
Signed-off-by: Mina Almasry <almasrymina@google.com>

---
 include/net/page_pool/helpers.h | 40 +++++++++++++++++
 include/net/page_pool/types.h   | 10 +++++
 net/core/page_pool.c            | 76 +++++++++++++++++++++++++++++++++
 3 files changed, 126 insertions(+)

diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 78cbb040af94..b93243c2a640 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -53,6 +53,7 @@
 #define _NET_PAGE_POOL_HELPERS_H
 
 #include <net/page_pool/types.h>
+#include <net/net_debug.h>
 
 #ifdef CONFIG_PAGE_POOL_STATS
 int page_pool_ethtool_stats_get_count(void);
@@ -111,6 +112,45 @@ page_pool_iov_binding(const struct page_pool_iov *ppiov)
 	return page_pool_iov_owner(ppiov)->binding;
 }
 
+static inline int page_pool_iov_refcount(const struct page_pool_iov *ppiov)
+{
+	return refcount_read(&ppiov->refcount);
+}
+
+static inline void page_pool_iov_get_many(struct page_pool_iov *ppiov,
+					  unsigned int count)
+{
+	refcount_add(count, &ppiov->refcount);
+}
+
+void __page_pool_iov_free(struct page_pool_iov *ppiov);
+
+static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov,
+					  unsigned int count)
+{
+	if (!refcount_sub_and_test(count, &ppiov->refcount))
+		return;
+
+	__page_pool_iov_free(ppiov);
+}
+
+/* page pool mm helpers */
+
+static inline bool page_is_page_pool_iov(const struct page *page)
+{
+	return (unsigned long)page & PP_DEVMEM;
+}
+
+static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
+{
+	if (page_is_page_pool_iov(page))
+		return (struct page_pool_iov *)((unsigned long)page &
+						~PP_DEVMEM);
+
+	DEBUG_NET_WARN_ON_ONCE(true);
+	return NULL;
+}
+
 /**
  * page_pool_dev_alloc_pages() - allocate a page.
 * @pool: pool from which to allocate
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 64386325d965..1e67f9466250 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -124,6 +124,7 @@ struct mem_provider;
 
 enum pp_memory_provider_type {
 	__PP_MP_NONE, /* Use system allocator directly */
+	PP_MP_DMABUF_DEVMEM, /* dmabuf devmem provider */
 };
 
 struct pp_memory_provider_ops {
@@ -133,8 +134,15 @@ struct pp_memory_provider_ops {
 	bool (*release_page)(struct page_pool *pool, struct page *page);
 };
 
+extern const struct pp_memory_provider_ops dmabuf_devmem_ops;
+
 /* page_pool_iov support */
 
+/* We overload the LSB of the struct page pointer to indicate whether it's
+ * a page or page_pool_iov.
+ */
+#define PP_DEVMEM 0x01UL
+
 /* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist
  * entry from the dmabuf is inserted into the genpool as a chunk, and needs
  * this owner struct to keep track of some metadata necessary to create
@@ -158,6 +166,8 @@ struct page_pool_iov {
 	struct dmabuf_genpool_chunk_owner *owner;
 
 	refcount_t refcount;
+
+	struct page_pool *pp;
 };
 
 struct page_pool {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 7ea1f4682479..138ddea0b28f 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -20,6 +20,7 @@
 #include <linux/poison.h>
 #include <linux/ethtool.h>
 #include <linux/netdevice.h>
+#include <linux/genalloc.h>
 
 #include <trace/events/page_pool.h>
 
@@ -231,6 +232,9 @@ static int page_pool_init(struct page_pool *pool,
 	switch (pool->p.memory_provider) {
 	case __PP_MP_NONE:
 		break;
+	case PP_MP_DMABUF_DEVMEM:
+		pool->mp_ops = &dmabuf_devmem_ops;
+		break;
 	default:
 		err = -EINVAL;
 		goto free_ptr_ring;
@@ -996,3 +1000,75 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+void __page_pool_iov_free(struct page_pool_iov *ppiov)
+{
+	if (ppiov->pp->mp_ops != &dmabuf_devmem_ops)
+		return;
+
+	netdev_free_devmem(ppiov);
+}
+EXPORT_SYMBOL_GPL(__page_pool_iov_free);
+
+/*** "Dmabuf devmem memory provider" ***/
+
+static int mp_dmabuf_devmem_init(struct page_pool *pool)
+{
+	struct netdev_dmabuf_binding *binding = pool->mp_priv;
+
+	if (!binding)
+		return -EINVAL;
+
+	if (pool->p.flags & PP_FLAG_DMA_MAP ||
+	    pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		return -EOPNOTSUPP;
+
+	netdev_devmem_binding_get(binding);
+	return 0;
+}
+
+static struct page *mp_dmabuf_devmem_alloc_pages(struct page_pool *pool,
+						 gfp_t gfp)
+{
+	struct netdev_dmabuf_binding *binding = pool->mp_priv;
+	struct page_pool_iov *ppiov;
+
+	ppiov = netdev_alloc_devmem(binding);
+	if (!ppiov)
+		return NULL;
+
+	ppiov->pp = pool;
+	pool->pages_state_hold_cnt++;
+	trace_page_pool_state_hold(pool, (struct page *)ppiov,
+				   pool->pages_state_hold_cnt);
+	return (struct page *)((unsigned long)ppiov | PP_DEVMEM);
+}
+
+static void mp_dmabuf_devmem_destroy(struct page_pool *pool)
+{
+	struct netdev_dmabuf_binding *binding = pool->mp_priv;
+
+	netdev_devmem_binding_put(binding);
+}
+
+static bool mp_dmabuf_devmem_release_page(struct page_pool *pool,
+					  struct page *page)
+{
+	struct page_pool_iov *ppiov;
+
+	if (WARN_ON_ONCE(!page_is_page_pool_iov(page)))
+		return false;
+
+	ppiov = page_to_page_pool_iov(page);
+	page_pool_iov_put_many(ppiov, 1);
+	/* We don't want the page pool put_page()ing our page_pool_iovs. */
+	return false;
+}
+
+const struct pp_memory_provider_ops dmabuf_devmem_ops = {
+	.init		= mp_dmabuf_devmem_init,
+	.destroy	= mp_dmabuf_devmem_destroy,
+	.alloc_pages	= mp_dmabuf_devmem_alloc_pages,
+	.release_page	= mp_dmabuf_devmem_release_page,
+};
+EXPORT_SYMBOL(dmabuf_devmem_ops);
-- 
2.42.0.869.gea05f2083d-goog