From: Yinjun Zhang <yinjun.zhang@corigine.com>
To: almasrymina@google.com
Cc: arnd@arndb.de, bpf@vger.kernel.org, christian.koenig@amd.com, corbet@lwn.net, davem@davemloft.net, dri-devel@lists.freedesktop.org, dsahern@kernel.org, edumazet@google.com, hawk@kernel.org, hramamurthy@google.com, ilias.apalodimas@linaro.org, jeroendb@google.com, kaiyuanz@google.com, kuba@kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, linyunsheng@huawei.com, netdev@vger.kernel.org, pabeni@redhat.com, pkaligineedi@google.com, shailend@google.com, shakeelb@google.com, shuah@kernel.org, sumit.semwal@linaro.org, willemb@google.com, willemdebruijn.kernel@gmail.com
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider
Date: Wed, 13 Dec 2023 15:49:12 +0800
Message-ID: <20231213074912.1275130-1-yinjun.zhang@corigine.com> (raw)
In-Reply-To: <20231208005250.2910004-9-almasrymina@google.com>

On Thu, 7 Dec 2023 16:52:39 -0800, Mina Almasry wrote:
<...>
> +static int mp_dmabuf_devmem_init(struct page_pool *pool)
> +{
> +	struct netdev_dmabuf_binding *binding = pool->mp_priv;
> +
> +	if (!binding)
> +		return -EINVAL;
> +
> +	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> +		return -EOPNOTSUPP;
> +
> +	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> +		return -EOPNOTSUPP;
> +
> +	netdev_dmabuf_binding_get(binding);
> +	return 0;
> +}
> +
> +static struct page *mp_dmabuf_devmem_alloc_pages(struct page_pool *pool,
> +						 gfp_t gfp)
> +{
> +	struct netdev_dmabuf_binding *binding = pool->mp_priv;
> +	struct page_pool_iov *ppiov;
> +
> +	ppiov = netdev_alloc_dmabuf(binding);

Since this only supports single-page allocation, we'd better add a check
in `ops->init()` that `pool->p.order` must be 0.
> +	if (!ppiov)
> +		return NULL;
> +
> +	ppiov->pp = pool;
> +	pool->pages_state_hold_cnt++;
> +	trace_page_pool_state_hold(pool, (struct page *)ppiov,
> +				   pool->pages_state_hold_cnt);
> +	return (struct page *)((unsigned long)ppiov | PP_IOV);
> +}
<...>