linux-kernel.vger.kernel.org archive mirror
From: "Marciniszyn, Mike" <mike.marciniszyn@cornelisnetworks.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
	"Dalessandro, Dennis" <dennis.dalessandro@cornelisnetworks.com>
Cc: Leon Romanovsky <leon@kernel.org>,
	Doug Ledford <dledford@redhat.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"Pine, Kevin" <kevin.pine@cornelisnetworks.com>
Subject: RE: [PATCH rdma-next] RDMA/rdmavt: Decouple QP and SGE lists allocations
Date: Mon, 28 Jun 2021 21:59:48 +0000	[thread overview]
Message-ID: <CH0PR01MB7153F90EA5FAD6C18D361CC4F2039@CH0PR01MB7153.prod.exchangelabs.com> (raw)
In-Reply-To: <20210525142048.GZ1002214@nvidia.com>

>
> Fine, but the main question is if you can use normal memory policy settings, not
> this.
>
> Jason

Our performance team has gathered some preliminary data on AMD platforms.

I prepared a kernel that, based on a module parameter, either allocates the QP on the "local" NUMA node (as currently done) or intentionally allocates it on the opposite socket. Our internal tests were then executed with progressively larger queue pair counts.
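
For clarity, here is a minimal sketch of what that experimental knob could look like; the parameter name and helper below are made up for illustration and are not the actual hfi1/rdmavt code, which passes the device's NUMA node to kzalloc_node():

/*
 * Hypothetical test-only knob: choose whether struct rvt_qp memory is
 * placed on the device-local NUMA node or on the opposite socket.
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/nodemask.h>

static bool alloc_opposite_socket;  /* hypothetical module parameter */
module_param(alloc_opposite_socket, bool, 0444);
MODULE_PARM_DESC(alloc_opposite_socket,
		 "Place QP allocations on the socket opposite the HFI (test only)");

static void *qp_alloc_for_test(size_t sz, int local_node)
{
	int node = local_node;

	if (alloc_opposite_socket) {
		/* crude two-socket assumption: pick the other online node */
		node = next_online_node(local_node);
		if (node >= MAX_NUMNODES)
			node = first_online_node;
	}

	/* kzalloc_node() requests memory from the chosen node when possible */
	return kzalloc_node(sz, GFP_KERNEL, node);
}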

In the second case, on 64-core/socket AMD platforms, the intentionally opposite-socket allocation degraded latency by ~6-7% and bandwidth by ~13% on high queue count perftest runs.

The impact on Skylake (SKX) is minimal, if any, but we need to look at the legacy Intel chips that preceded SKX. We are still reviewing the data and expanding the tests to older chips.

Our theory is that hfi1 interrupt receive processing is fetching cache lines across the sockets, causing the slowdown. Receive processing is critical for hfi1 (and for qib before it), and this is a heavily tuned code path.

To answer some of the questions posed earlier: mempolicy appears to be a process-relative control, so it does not apply to our QP allocation, where struct rvt_qp lives in the kernel. It certainly does not apply to kernel ULPs such as those created by, say, Lustre, IPoIB, SRP, iSER, and NFS RDMA.
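
As a minimal illustration of that point (node mask and sizes chosen arbitrarily), a userspace mempolicy only steers the calling task's own page placement:

/*
 * Requires libnuma headers; build with -lnuma. This binds the calling
 * task's future page allocations to node 0, but it cannot influence the
 * kernel-side kzalloc_node() of struct rvt_qp done on its behalf.
 */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	unsigned long nodemask = 1UL << 0;	/* node 0, for example */

	if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask)))
		perror("set_mempolicy");

	/* user pages faulted in by this task now follow the policy ... */
	char *buf = malloc(1 << 20);
	if (buf)
		memset(buf, 0, 1 << 20);

	/*
	 * ... but a QP created through the verbs API still has its
	 * struct rvt_qp allocated inside the kernel, where this policy
	 * is not consulted.
	 */
	free(buf);
	return 0;
}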

We do support comp_vector, but that distributes completion processing. Completions are triggered from our receive processing, though to a much lesser extent, depending on ULP choices and packet type. From a strategy standpoint, the code assumes that kernel receive interrupt processing is vectored, either by irqbalance or by explicit user-mode scripting, to spread RC QP receive processing across the CPUs on the local socket.
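
To make the distinction concrete, this is roughly how a libibverbs consumer spreads completion processing with comp_vector (device/context setup omitted; the helper name is made up). It selects a completion event vector only and says nothing about where receive interrupt processing for the QP runs:

#include <infiniband/verbs.h>

/* Round-robin CQs across the device's completion vectors. */
static struct ibv_cq *create_cq_on_vector(struct ibv_context *ctx,
					  int cq_depth, int i)
{
	int vector = i % ctx->num_comp_vectors;

	/* the last argument picks the completion event vector for this CQ */
	return ibv_create_cq(ctx, cq_depth, NULL, NULL, vector);
}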

Mike

Thread overview: 38+ messages
2021-05-11 10:36 [PATCH rdma-next] RDMA/rdmavt: Decouple QP and SGE lists allocations Leon Romanovsky
2021-05-11 10:59 ` Haakon Bugge
2021-05-11 12:34   ` Leon Romanovsky
2021-05-11 19:15     ` Marciniszyn, Mike
2021-05-11 19:27       ` Leon Romanovsky
2021-05-11 19:39         ` Marciniszyn, Mike
2021-05-12  4:08         ` Dennis Dalessandro
2021-05-12 12:13           ` Leon Romanovsky
2021-05-12 12:45             ` Dennis Dalessandro
2021-05-11 12:26 ` Dennis Dalessandro
2021-05-11 12:34   ` Leon Romanovsky
2021-05-12 12:25     ` Marciniszyn, Mike
2021-05-12 12:50       ` Leon Romanovsky
2021-05-13 19:03         ` Dennis Dalessandro
2021-05-13 19:15           ` Jason Gunthorpe
2021-05-13 19:31             ` Dennis Dalessandro
2021-05-14 13:02               ` Jason Gunthorpe
2021-05-14 14:07                 ` Dennis Dalessandro
2021-05-14 14:35                   ` Jason Gunthorpe
2021-05-14 15:00                     ` Marciniszyn, Mike
2021-05-14 15:02                       ` Jason Gunthorpe
2021-05-19  7:50                         ` Leon Romanovsky
2021-05-19 11:56                           ` Dennis Dalessandro
2021-05-19 18:29                             ` Jason Gunthorpe
2021-05-19 19:49                               ` Dennis Dalessandro
2021-05-19 20:26                                 ` Jason Gunthorpe
2021-05-20 22:02                                   ` Dennis Dalessandro
2021-05-21  6:29                                     ` Leon Romanovsky
2021-05-25 13:13                                     ` Jason Gunthorpe
2021-05-25 14:10                                       ` Dennis Dalessandro
2021-05-25 14:20                                         ` Jason Gunthorpe
2021-05-25 14:29                                           ` Dennis Dalessandro
2021-06-28 21:59                                           ` Marciniszyn, Mike [this message]
2021-06-28 23:19                                             ` Jason Gunthorpe
2021-07-04  6:34                                               ` Leon Romanovsky
2021-06-02  4:33                                         ` Leon Romanovsky
2021-05-16 10:56           ` Leon Romanovsky
2021-05-12 12:23 ` Marciniszyn, Mike
