From: Boris Pismenny <borispismenny@gmail.com>
To: Christoph Hellwig <hch@infradead.org>, Boris Pismenny <borisp@nvidia.com>
Cc: dsahern@gmail.com, kuba@kernel.org, davem@davemloft.net, saeedm@nvidia.com, hch@lst.de, sagi@grimberg.me, axboe@fb.com, kbusch@kernel.org, viro@zeniv.linux.org.uk, edumazet@google.com, smalin@marvell.com, boris.pismenny@gmail.com, linux-nvme@lists.infradead.org, netdev@vger.kernel.org, benishay@nvidia.com, ogerlitz@nvidia.com, yorayz@nvidia.com, Boris Pismenny <borisp@mellanox.com>, Ben Ben-Ishay <benishay@mellanox.com>, Or Gerlitz <ogerlitz@mellanox.com>, Yoray Zack <yorayz@mellanox.com>
Subject: Re: [PATCH v5 net-next 02/36] iov_iter: DDP copy to iter/pages
Date: Thu, 22 Jul 2021 23:23:38 +0300
Message-ID: <6f7f96dc-f1e6-99d9-6ab4-920126615302@gmail.com>
In-Reply-To: <YPlzHTnoxDinpOsP@infradead.org>

On 22/07/2021 16:31, Christoph Hellwig wrote:
>> +#ifdef CONFIG_ULP_DDP
>> +size_t _ddp_copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i);
>> +#endif
>>  size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i);
>>  bool _copy_from_iter_full(void *addr, size_t bytes, struct iov_iter *i);
>>  size_t _copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i);
>> @@ -145,6 +148,16 @@ size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
>>  	return _copy_to_iter(addr, bytes, i);
>>  }
>>
>> +#ifdef CONFIG_ULP_DDP
>> +static __always_inline __must_check
>> +size_t ddp_copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
>> +{
>> +	if (unlikely(!check_copy_size(addr, bytes, true)))
>> +		return 0;
>> +	return _ddp_copy_to_iter(addr, bytes, i);
>> +}
>> +#endif
>
> There is no need to ifdef out externs with conditional implementations,
> or inlines using them.
>
>> +#ifdef CONFIG_ULP_DDP
>> +static void ddp_memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len)
>
> Overly long line.
>
>> +	char *to = kmap_atomic(page);
>> +
>> +	if (to + offset != from)
>> +		memcpy(to + offset, from, len);
>> +
>> +	kunmap_atomic(to);
>
> This looks completely bogus to any casual read, so please document why
> it makes sense. And no, a magic, unexplained ddp in the name does not
> count as explanation at all. Please think about a more useful name.

This routine, like the other changes in this file, replicates the logic
of memcpy_to_page. The only difference is that the "ddp" variant avoids
the copy when the source and destination buffers are one and the same.
These routines are then used by nvme-tcp (see skb_ddp_copy_datagram_iter),
which receives SKBs from a NIC that has already placed the data at its
destination; this is the source of the name Direct Data Placement. I'd
gladly take suggestions for better names, but this is the best we have
come up with so far. The reason for a separate routine is to avoid
modifying memcpy_to_page itself, while still allowing users (e.g.,
nvme-tcp) to access this functionality directly.

>
> Can this ever write to a user page? If yes it needs a flush_dcache_page.

Yes, will add.

>
> Last but not least: kmap_atomic is deprecated except for the very
> rare use case where it is actually called from atomic context. Please
> use kmap_local_page instead.
>

Will look into it, thanks!

>> +#ifdef CONFIG_CRYPTO_HASH
>> +	struct ahash_request *hash = hashp;
>> +	struct scatterlist sg;
>> +	size_t copied;
>> +
>> +	copied = ddp_copy_to_iter(addr, bytes, i);
>> +	sg_init_one(&sg, addr, copied);
>> +	ahash_request_set_crypt(hash, &sg, NULL, copied);
>> +	crypto_ahash_update(hash);
>> +	return copied;
>> +#else
>> +	return 0;
>> +#endif
>
> What is the point of this stub? To me it looks extremely dangerous.
>

As above, we use the same logic as in hash_and_copy_to_iter. The purpose
is again to avoid the copy in case the source and destination buffers
are one and the same.