From: Mina Almasry <almasrymina@google.com>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: "David Ahern" <dsahern@kernel.org>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linaro-mm-sig@lists.linaro.org,
	"David S. Miller" <davem@davemloft.net>,
	"Eric Dumazet" <edumazet@google.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>,
	"Jesper Dangaard Brouer" <hawk@kernel.org>,
	"Ilias Apalodimas" <ilias.apalodimas@linaro.org>,
	"Arnd Bergmann" <arnd@arndb.de>,
	"Willem de Bruijn" <willemdebruijn.kernel@gmail.com>,
	"Shuah Khan" <shuah@kernel.org>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Shakeel Butt" <shakeelb@google.com>,
	"Jeroen de Borst" <jeroendb@google.com>,
	"Praveen Kaligineedi" <pkaligineedi@google.com>,
	"Willem de Bruijn" <willemb@google.com>,
	"Kaiyuan Zhang" <kaiyuanz@google.com>
Subject: Re: [RFC PATCH v3 05/12] netdev: netdevice devmem allocator
Date: Wed, 8 Nov 2023 17:41:56 -0800	[thread overview]
Message-ID: <CAHS8izPgioCzFGadNFNFWr_tqi--YBF8qrNqi8ELgixA9ZX0rQ@mail.gmail.com> (raw)
In-Reply-To: <6c629d6d-6927-3857-edaa-1971a94b6e93@huawei.com>

> > On Mon, Nov 6, 2023 at 11:45 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
> >>
> >> On 2023/11/6 10:44, Mina Almasry wrote:
> >>> +
> >>> +void netdev_free_devmem(struct page_pool_iov *ppiov)
> >>> +{
> >>> +     struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
> >>> +
> >>> +     refcount_set(&ppiov->refcount, 1);
> >>> +
> >>> +     if (gen_pool_has_addr(binding->chunk_pool,
> >>> +                           page_pool_iov_dma_addr(ppiov), PAGE_SIZE))
> >>
> >> When gen_pool_has_addr() returns false, does it mean something has gone
> >> really wrong here?
> >>
> >
> > Yes, good eye. gen_pool_has_addr() should never return false, but then
> > again, gen_pool_free()  BUG_ON()s if it doesn't find the address,
> > which is an extremely severe reaction to what can be a minor bug in
> > the accounting. I prefer to leak rather than crash the machine. It's a
> > bit of defensive programming that is normally frowned upon, but I feel
> > like in this case it's maybe warranted due to the very severe reaction
> > (BUG_ON).
>
> I would argue that why is the above defensive programming not done in the
> gen_pool core:)
>

I think gen_pool is really not that new, and removing the BUG_ONs has
likely been proposed before and rejected. I'll try to do some research
and maybe suggest downgrading the BUG_ON to a WARN_ON, but my guess is
there is some reason the maintainer wants it to be a BUG_ON.
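
For illustration, a rough sketch of the caller-side defensive check
being discussed, with the otherwise-silent leak made visible via
WARN_ON_ONCE(). Helper names are taken from the patch hunk quoted
above; the rest of the function is elided, so this is not the complete
implementation:

/* Sketch only: gen_pool_has_addr() keeps an accounting bug from
 * tripping the BUG_ON() inside gen_pool_free(); instead we warn once
 * and leak the chunk.
 */
void netdev_free_devmem(struct page_pool_iov *ppiov)
{
	struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
	unsigned long dma_addr = page_pool_iov_dma_addr(ppiov);

	refcount_set(&ppiov->refcount, 1);

	if (gen_pool_has_addr(binding->chunk_pool, dma_addr, PAGE_SIZE))
		gen_pool_free(binding->chunk_pool, dma_addr, PAGE_SIZE);
	else
		WARN_ON_ONCE(1);	/* leak rather than crash */

	/* ... */
}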

On Wed, Nov 8, 2023 at 5:00 PM David Wei <dw@davidwei.uk> wrote:
>
> On 2023-11-07 14:55, David Ahern wrote:
> > On 11/7/23 3:10 PM, Mina Almasry wrote:
> >> On Mon, Nov 6, 2023 at 3:44 PM David Ahern <dsahern@kernel.org> wrote:
> >>>
> >>> On 11/5/23 7:44 PM, Mina Almasry wrote:
> >>>> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> >>>> index eeeda849115c..1c351c138a5b 100644
> >>>> --- a/include/linux/netdevice.h
> >>>> +++ b/include/linux/netdevice.h
> >>>> @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding {
> >>>>  };
> >>>>
> >>>>  #ifdef CONFIG_DMA_SHARED_BUFFER
> >>>> +struct page_pool_iov *
> >>>> +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding);
> >>>> +void netdev_free_devmem(struct page_pool_iov *ppiov);
> >>>
> >>> netdev_{alloc,free}_dmabuf?
> >>>
> >>
> >> Can do.
> >>
> >>> I say that because a dmabuf can be host memory, at least I am not aware
> >>> of a restriction that a dmabuf is device memory.
> >>>
> >>
> >> In my limited experience dma-buf is generally device memory, and
> >> that's really its use case. CONFIG_UDMABUF is a driver that mocks
> >> dma-buf with a memfd which I think is used for testing. But I can do
> >> the rename, it's more clear anyway, I think.
> >
> > config UDMABUF
> >         bool "userspace dmabuf misc driver"
> >         default n
> >         depends on DMA_SHARED_BUFFER
> >         depends on MEMFD_CREATE || COMPILE_TEST
> >         help
> >           A driver to let userspace turn memfd regions into dma-bufs.
> >           Qemu can use this to create host dmabufs for guest framebuffers.
> >
> >
> > Qemu is just a userspace process; it is no way a special one.
> >
> > Treating host memory as a dmabuf should radically simplify the io_uring
> > extension of this set. That the io_uring set needs to dive into
> > page_pools is just wrong - complicating the design and code and pushing
> > io_uring into a realm it does not need to be involved in.
>
> I think our io_uring proposal will already be vastly simplified once we
> rebase onto Kuba's page pool memory provider API. Using udmabuf means
> depending on a driver designed for testing, vs io_uring's registered
> buffers API that's been tried and tested.
>

FWIW I also get the impression that udmabuf is mostly targeted at
testing, but I'm not aware of any deficiency that makes it concretely
unsuitable for you. You be the judge.

The only quirk of udmabuf I'm aware of is that it seems to cap the max
dma-buf size at 16000 pages. I'm not sure whether that's a genuine
technical limitation or just convenience.
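
In case it helps anyone reproduce this: creating a udmabuf from
userspace is just a sealed memfd plus one ioctl on /dev/udmabuf. A
minimal sketch (error handling mostly omitted; size must be
page-aligned):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/udmabuf.h>

/* Returns a dma-buf fd backed by host memory, or -1 on error. */
int create_udmabuf(size_t size)
{
	struct udmabuf_create create = { 0 };
	int memfd, devfd, buffd;

	memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING);
	ftruncate(memfd, size);
	/* udmabuf requires the memfd to be sealed against shrinking. */
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

	devfd = open("/dev/udmabuf", O_RDWR);
	create.memfd = memfd;
	create.offset = 0;
	create.size = size;
	buffd = ioctl(devfd, UDMABUF_CREATE, &create); /* new dma-buf fd */

	close(devfd);
	close(memfd);
	return buffd;
}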

> I don't have an intuitive understanding of the trade offs yet, and would
> need to try out udmabuf and compare vs say using our own page pool
> memory provider.
>


On Wed, Nov 8, 2023 at 5:15 PM David Wei <dw@davidwei.uk> wrote:
> How would TCP devmem change if we no longer assume that dmabuf is device
> memory?

It wouldn't. The code already never assumes that dmabuf is device
memory. Any dma-buf should work, as far as I can tell. I'm also quite
confident udmabuf works; I use it for testing.

(Jason Gunthorpe is much more of an expert and may chime in to say
'some dma-buf will not work'. My primitive understanding is that we're
using dma-bufs without any quirks and any dma-buf should work. I of
course haven't tested all dma-bufs :D)
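
That follows from how dma-buf import works in general: an importer only
ever sees the sg_table of DMA addresses the exporter hands back, so the
kind of memory behind the buffer is invisible to it. Roughly (a sketch
of the standard importer calls, not the exact code in this series):

/* Generic dma-buf import: attach to the buffer and map it for DMA.
 * The resulting sg_table is all a consumer like the devmem binding
 * needs; the backing memory type never shows up at this level.
 */
struct sg_table *import_dmabuf(struct device *dev, int dmabuf_fd,
			       struct dma_buf_attachment **attachp)
{
	struct dma_buf *dmabuf = dma_buf_get(dmabuf_fd);
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	sgt = dma_buf_map_attachment(attach, DMA_FROM_DEVICE);
	*attachp = attach;
	return sgt;
}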

> Pavel will know more on the perf side, but I wouldn't want to
> put any if/else on the hot path if we can avoid it. I could be wrong,
> but right now in my mind using different memory providers solves this
> neatly and the driver/networking stack doesn't need to care.
>
> Mina, I believe you said at NetDev conf that you already had an udmabuf
> implementation for testing. I would like to see this (you can send
> privately) to see how TCP devmem would handle both user memory and
> device memory.
>

There is nothing to send privately. The patch series you're looking at
supports udmabuf as-is, and the kselftest included with the series
demonstrates devmem TCP working with udmabuf.

The only thing missing from this series is the driver support. You can
see the GVE driver support for devmem TCP here:

https://github.com/torvalds/linux/compare/master...mina:linux:tcpdevmem-v3

You may need to implement devmem TCP for your driver before you can
reproduce udmabuf working for yourself, though.

-- 
Thanks,
Mina
