From: Aurelien Aptel <aaptel@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
	hch@lst.de, kbusch@kernel.org, axboe@fb.com,
	chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org
Cc: Boris Pismenny <borisp@nvidia.com>,
	aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
	ogerlitz@nvidia.com, yorayz@nvidia.com, galshalom@nvidia.com,
	mgurtovoy@nvidia.com, edumazet@google.com, pabeni@redhat.com,
	dsahern@kernel.org, ast@kernel.org, jacob.e.keller@intel.com
Subject: Re: [PATCH v24 01/20] net: Introduce direct data placement tcp offload
Date: Fri, 26 Apr 2024 10:21:46 +0300
Message-ID: <253o79wr3lh.fsf@mtr-vdi-124.i-did-not-set--mail-host-address--so-tickle-me>
In-Reply-To: <3ab22e14-35eb-473e-a821-6dbddea96254@grimberg.me>

Hi Sagi,

Sagi Grimberg <sagi@grimberg.me> writes:
>> +     config->io_cpu = sk->sk_incoming_cpu;
>> +     ret = netdev->netdev_ops->ulp_ddp_ops->sk_add(netdev, sk, config);
>
> Still don't understand why you need the io_cpu config if you are passing
> the sk to the driver...

With our HW we cannot move the offload queues to a different CPU
without destroying and recreating the offload resources on the new
CPU.

Since the connection is created on a different CPU than the one running
the io queue thread, we cannot predict which CPU the offload context
should be created on.

Ideally, io_cpu should be set to nvme_queue->io_cpu (a sketch of what
that could look like is below), or the field should be removed and the
socket offloaded from the io thread itself. Which do you prefer?
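
To make the first option concrete, here is a minimal sketch
(illustrative only, not tested). The ulp_ddp_config / sk_add() shapes
come from this series, and queue->io_cpu is the per-queue field
nvme-tcp already keeps; the helper name nvme_tcp_offload_socket() and
the way the netdev is passed in are placeholder assumptions:

  /*
   * Sketch: pin the offload context to the CPU the queue's io thread
   * will run on, rather than the CPU that happened to accept the
   * connection.  Helper name and parameters are hypothetical.
   */
  static int nvme_tcp_offload_socket(struct nvme_tcp_queue *queue,
                                     struct net_device *netdev,
                                     struct ulp_ddp_config *config)
  {
          struct sock *sk = queue->sock->sk;

          /* sk->sk_incoming_cpu reflects whichever CPU the connection
           * was created on, which is unrelated to where io_work runs. */
          config->io_cpu = queue->io_cpu;

          return netdev->netdev_ops->ulp_ddp_ops->sk_add(netdev, sk, config);
  }

With the second option, io_cpu would be dropped from ulp_ddp_config
entirely and sk_add() would be called from the queue's io thread, so
the driver could simply use smp_processor_id() when it creates the
offload context.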

Thanks


Thread overview: 44+ messages
2024-04-04 12:36 [PATCH v24 00/20] nvme-tcp receive offloads Aurelien Aptel
2024-04-04 12:36 ` [PATCH v24 01/20] net: Introduce direct data placement tcp offload Aurelien Aptel
2024-04-21 11:47   ` Sagi Grimberg
2024-04-26  7:21     ` Aurelien Aptel [this message]
2024-04-28  8:15       ` Sagi Grimberg
2024-04-29 11:35         ` Aurelien Aptel
2024-04-30 11:54           ` Sagi Grimberg
2024-05-02  7:04             ` Aurelien Aptel
2024-05-03  7:31               ` Sagi Grimberg
2024-05-06 12:28                 ` Aurelien Aptel
2024-04-04 12:36 ` [PATCH v24 02/20] netlink: add new family to manage ULP_DDP enablement and stats Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 03/20] iov_iter: skip copy if src == dst for direct data placement Aurelien Aptel
2024-04-15 14:28   ` Max Gurtovoy
2024-04-16 20:30   ` David Laight
2024-04-18  8:22     ` Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 04/20] net/tls,core: export get_netdev_for_sock Aurelien Aptel
2024-04-21 11:45   ` Sagi Grimberg
2024-04-04 12:37 ` [PATCH v24 05/20] nvme-tcp: Add DDP offload control path Aurelien Aptel
2024-04-07 22:08   ` Sagi Grimberg
2024-04-10  6:31     ` Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 06/20] nvme-tcp: Add DDP data-path Aurelien Aptel
2024-04-07 22:08   ` Sagi Grimberg
2024-04-10  6:31     ` Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 07/20] nvme-tcp: RX DDGST offload Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 08/20] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 09/20] Documentation: add ULP DDP offload documentation Aurelien Aptel
2024-04-09  8:49   ` Bagas Sanjaya
2024-04-04 12:37 ` [PATCH v24 10/20] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 11/20] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 12/20] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 13/20] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 14/20] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 15/20] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 16/20] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 17/20] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 18/20] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 19/20] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 20/20] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel
2024-04-06  5:45 ` [PATCH v24 00/20] nvme-tcp receive offloads Jakub Kicinski
2024-04-07 22:21   ` Sagi Grimberg
2024-04-09 22:35     ` Chaitanya Kulkarni
2024-04-09 22:59       ` Jakub Kicinski
2024-04-18  8:29         ` Chaitanya Kulkarni
2024-04-18 15:28           ` Jakub Kicinski
