From: Mina Almasry <almasrymina@google.com>
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>, Martin KaFai Lau <martin.lau@linux.dev>, Song Liu <song@kernel.org>, Yonghong Song <yonghong.song@linux.dev>, John Fastabend <john.fastabend@gmail.com>, KP Singh <kpsingh@kernel.org>, Stanislav Fomichev <sdf@google.com>, Hao Luo <haoluo@google.com>, Jiri Olsa <jolsa@kernel.org>, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: "Mina Almasry" <almasrymina@google.com>, "David S. Miller" <davem@davemloft.net>, "Eric Dumazet" <edumazet@google.com>, "Jakub Kicinski" <kuba@kernel.org>, "Paolo Abeni" <pabeni@redhat.com>, "Jonathan Corbet" <corbet@lwn.net>, "Richard Henderson" <richard.henderson@linaro.org>, "Ivan Kokshaysky" <ink@jurassic.park.msu.ru>, "Matt Turner" <mattst88@gmail.com>, "Thomas Bogendoerfer" <tsbogend@alpha.franken.de>, "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>, "Helge Deller" <deller@gmx.de>, "Jesper Dangaard Brouer" <hawk@kernel.org>, "Ilias Apalodimas" <ilias.apalodimas@linaro.org>, "Steven Rostedt" <rostedt@goodmis.org>, "Masami Hiramatsu" <mhiramat@kernel.org>, "Arnd Bergmann" <arnd@arndb.de>, "Alexei Starovoitov" <ast@kernel.org>, "Daniel Borkmann" <daniel@iogearbox.net>, "Andrii Nakryiko" <andrii@kernel.org>, "David Ahern" <dsahern@kernel.org>, "Willem de Bruijn" <willemdebruijn.kernel@gmail.com>, "Shuah Khan" <shuah@kernel.org>, "Sumit Semwal" <sumit.semwal@linaro.org>, "Christian König" <christian.koenig@amd.com>, "Pavel Begunkov" <asml.silence@gmail.com>, "David Wei" <dw@davidwei.uk>, "Jason Gunthorpe" <jgg@ziepe.ca>, "Yunsheng Lin" <linyunsheng@huawei.com>, "Shailend Chand" <shailend@google.com>, "Harshitha Ramamurthy" <hramamurthy@google.com>, "Shakeel Butt" <shakeelb@google.com>, "Jeroen de Borst" <jeroendb@google.com>, "Praveen Kaligineedi" <pkaligineedi@google.com>, "Willem de Bruijn" <willemb@google.com>, "Kaiyuan Zhang" <kaiyuanz@google.com>
Subject: [RFC PATCH net-next v5 08/14] memory-provider: dmabuf devmem memory provider
Date: Sun, 17 Dec 2023 18:40:15 -0800
Message-ID: <20231218024024.3516870-9-almasrymina@google.com> (raw)
In-Reply-To: <20231218024024.3516870-1-almasrymina@google.com>

Implement a memory provider that allocates dmabuf devmem in the form of
net_iov.

The provider receives a reference to the struct netdev_dmabuf_binding
via the pool->mp_priv pointer. The driver needs to set this pointer for
the provider in the net_iov.

The provider obtains a reference on the netdev_dmabuf_binding which
guarantees the binding and the underlying mapping remain alive until
the provider is destroyed.

Usage of PP_FLAG_DMA_MAP is required for this memory provider such that
the page_pool can provide the driver with the dma-addrs of the devmem.

Support for PP_FLAG_DMA_SYNC_DEV is omitted for simplicity &
p.order != 0.
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Kaiyuan Zhang <kaiyuanz@google.com>
Signed-off-by: Mina Almasry <almasrymina@google.com>

---

v2:
- Disable devmem for p.order != 0

v1:
- static_branch check in page_is_page_pool_iov() (Willem & Paolo).
- PP_DEVMEM -> PP_IOV (David).
- Require PP_FLAG_DMA_MAP (Jakub).

memory provider

---
 include/net/netmem.h          | 14 ++++++
 include/net/page_pool/types.h |  2 +
 net/core/page_pool.c          | 93 +++++++++++++++++++++++++++++++++++
 3 files changed, 109 insertions(+)

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 7557aecc0f78..ab3824b7b789 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -97,6 +97,20 @@ static inline bool netmem_is_net_iov(const struct netmem *netmem)
 #endif
 }
 
+static inline struct net_iov *netmem_to_net_iov(struct netmem *netmem)
+{
+	if (netmem_is_net_iov(netmem))
+		return (struct net_iov *)((unsigned long)netmem & ~NET_IOV);
+
+	DEBUG_NET_WARN_ON_ONCE(true);
+	return NULL;
+}
+
+static inline struct netmem *net_iov_to_netmem(struct net_iov *niov)
+{
+	return (struct netmem *)((unsigned long)niov | NET_IOV);
+}
+
 static inline struct page *netmem_to_page(struct netmem *netmem)
 {
 	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 433ae9ae658b..3ddef7d7ba74 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -134,6 +134,8 @@ struct memory_provider_ops {
 	bool (*release_page)(struct page_pool *pool, struct netmem *netmem);
 };
 
+extern const struct memory_provider_ops dmabuf_devmem_ops;
+
 struct page_pool {
 	struct page_pool_params_fast p;
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 173158a3dd61..231840112956 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -12,6 +12,7 @@
 
 #include <net/page_pool/helpers.h>
 #include <net/xdp.h>
+#include <net/netdev_rx_queue.h>
 
 #include <linux/dma-direction.h>
 #include <linux/dma-mapping.h>
@@ -20,12 +21,15 @@
 #include <linux/poison.h>
 #include <linux/ethtool.h>
 #include <linux/netdevice.h>
+#include <linux/genalloc.h>
+#include <net/devmem.h>
 
 #include <trace/events/page_pool.h>
 
 #include "page_pool_priv.h"
 
 DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
+EXPORT_SYMBOL(page_pool_mem_providers);
 
 #define DEFER_TIME (msecs_to_jiffies(1000))
 #define DEFER_WARN_INTERVAL (60 * HZ)
@@ -175,6 +179,7 @@ static void page_pool_producer_unlock(struct page_pool *pool,
 static int page_pool_init(struct page_pool *pool,
 			  const struct page_pool_params *params)
 {
+	struct netdev_dmabuf_binding *binding = NULL;
 	unsigned int ring_qsize = 1024; /* Default */
 	int err;
 
@@ -237,6 +242,14 @@ static int page_pool_init(struct page_pool *pool,
 	/* Driver calling page_pool_create() also call page_pool_destroy() */
 	refcount_set(&pool->user_cnt, 1);
 
+	if (pool->p.queue)
+		binding = READ_ONCE(pool->p.queue->binding);
+
+	if (binding) {
+		pool->mp_ops = &dmabuf_devmem_ops;
+		pool->mp_priv = binding;
+	}
+
 	if (pool->mp_ops) {
 		err = pool->mp_ops->init(pool);
 		if (err) {
@@ -1055,3 +1068,83 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+/*** "Dmabuf devmem memory provider" ***/
+
+static int mp_dmabuf_devmem_init(struct page_pool *pool)
+{
+	struct netdev_dmabuf_binding *binding = pool->mp_priv;
+
+	if (!binding)
+		return -EINVAL;
+
+	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
+		return -EOPNOTSUPP;
+
+	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		return -EOPNOTSUPP;
+
+	if (pool->p.order != 0)
+		return -E2BIG;
+
+	netdev_dmabuf_binding_get(binding);
+	return 0;
+}
+
+static struct netmem *mp_dmabuf_devmem_alloc_pages(struct page_pool *pool,
+						   gfp_t gfp)
+{
+	struct netdev_dmabuf_binding *binding = pool->mp_priv;
+	struct netmem *netmem;
+	struct net_iov *niov;
+	dma_addr_t dma_addr;
+
+	niov = netdev_alloc_dmabuf(binding);
+	if (!niov)
+		return NULL;
+
+	dma_addr = net_iov_dma_addr(niov);
+
+	netmem = net_iov_to_netmem(niov);
+
+	page_pool_set_pp_info(pool, netmem);
+
+	if (page_pool_set_dma_addr_netmem(netmem, dma_addr))
+		goto err_free;
+
+	pool->pages_state_hold_cnt++;
+	trace_page_pool_state_hold(pool, netmem, pool->pages_state_hold_cnt);
+	return netmem;
+
+err_free:
+	netdev_free_dmabuf(niov);
+	return NULL;
+}
+
+static void mp_dmabuf_devmem_destroy(struct page_pool *pool)
+{
+	struct netdev_dmabuf_binding *binding = pool->mp_priv;
+
+	netdev_dmabuf_binding_put(binding);
+}
+
+static bool mp_dmabuf_devmem_release_page(struct page_pool *pool,
+					  struct netmem *netmem)
+{
+	WARN_ON_ONCE(!netmem_is_net_iov(netmem));
+
+	page_pool_clear_pp_info(netmem);
+
+	netdev_free_dmabuf(netmem_to_net_iov(netmem));
+
+	/* We don't want the page pool put_page()ing our net_iovs. */
+	return false;
+}
+
+const struct memory_provider_ops dmabuf_devmem_ops = {
+	.init		= mp_dmabuf_devmem_init,
+	.destroy	= mp_dmabuf_devmem_destroy,
+	.alloc_pages	= mp_dmabuf_devmem_alloc_pages,
+	.release_page	= mp_dmabuf_devmem_release_page,
+};
+EXPORT_SYMBOL(dmabuf_devmem_ops);
-- 
2.43.0.472.g3155946c3a-goog