From: Jason Gunthorpe <jgg@nvidia.com>
To: David Ahern <dsahern@gmail.com>
Cc: Parav Pandit <parav@nvidia.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
Jiri Pirko <jiri@nvidia.com>,
"dledford@redhat.com" <dledford@redhat.com>,
Leon Romanovsky <leonro@nvidia.com>,
Saeed Mahameed <saeedm@nvidia.com>,
"kuba@kernel.org" <kuba@kernel.org>,
"davem@davemloft.net" <davem@davemloft.net>,
Vu Pham <vuhuong@nvidia.com>
Subject: Re: [PATCH net-next 03/13] devlink: Support add and delete devlink port
Date: Wed, 18 Nov 2020 16:42:26 -0400
Message-ID: <20201118204226.GB917484@nvidia.com>
In-Reply-To: <ac0c5a69-06e4-3809-c778-b27d6e437ed5@gmail.com>
On Wed, Nov 18, 2020 at 12:36:26PM -0700, David Ahern wrote:
> On 11/18/20 11:38 AM, Jason Gunthorpe wrote:
> > On Wed, Nov 18, 2020 at 11:03:24AM -0700, David Ahern wrote:
> >
> >> With ConnectX-4 Lx, for example, the netdev can have at most 63 queues
> >
> > What netdev calls a queue is really "can the device deliver
> > interrupts and packets to a given per-CPU queue", and that covers a
> > whole spectrum of smaller limits: the RSS scheme, the # of available
> > interrupts, the device's ability to create queues, etc.
> >
> > CX4Lx can create a huge number of queues, but hits one of these limits
> > that means netdev's specific usage can't scale up. Other stuff like
> > RDMA doesn't have the same limits, and has tonnes of queues.
> >
> > What seems to be needed is a resource controller concept like cgroup
> > has for processes. The system is really organized into a tree:
> >
> >                physical device
> >                   mlx5_core
> >              /     |     \      \       (aux bus)
> >         netdev   rdma   vdpa   SF etc
> >                                  |       (aux bus)
> >                              mlx5_core
> >                               /     \    (aux bus)
> >                           netdev   vdpa
> >
> > And it does make a lot of sense to start to talk about limits at each
> > tree level.
> >
> > eg the top of the tree may have 128 physical interrupts. With 128 CPU
> > cores, and each consumer ideally wanting one interrupt per core, that
> > isn't enough to support all of those things concurrently.
> >
> > So the user may want to configure:
> > - The first level netdev only gets 64
> > - The 3rd level mlx5_core gets 32
> > - The final level vdpa gets 8
> >
> > Other stuff has to fight it out with the remaining shared interrupts.
> >
> > In netdev land the # of interrupts governs the # of queues
> >
> > For RDMA the # of interrupts limits the CPU affinities for queues
> >
> > For VDPA the # of interrupts limits the # of VMs that can use VT-d
> >
> > The same story repeats for other, less general resources: mlx5 also
> > consumes limited BAR space and some limited memory elements. Those
> > numbers are much bigger and may not need explicit governing, but the
> > general concept holds.
> >
> > It would be very nice if the limit could be injected when the aux
> > device is created but before the driver is bound. I'm not sure how to
> > manage that though.
> >
> > I assume other devices will be different; maybe some have a limit on
> > the total number of queues, or a limit on the number of VDPA or RDMA
> > devices.
>
> A lot of low level resource details that need to be summarized into a
> nicer user / config perspective to specify limits / allocations.
Well, now that we have the aux bus stuff there is a nice natural place
to put things:

 - The aux bus owner device (mlx5_core) could have a list of available
   resources
 - Each aux bus device (netdev/rdma/vdpa) could have a list of consumed
   resources
 - Some API to place a limit on the consumed resources at each aux bus
   device
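The existing devlink resource API might already be close to a fit for
the first two of those. A very rough sketch, assuming the usual
devlink/auxiliary_bus headers; the RES_* ids, the counts, and the
per-aux-device split are made up purely for illustration, and error
handling is omitted:

  struct devlink_resource_size_params params;
  struct devlink *dl = priv_to_devlink(dev);   /* mlx5_core's devlink */

  /* Top level pool: all the interrupts the PF owns */
  devlink_resource_size_params_init(&params, 0, 128, 1,
                                    DEVLINK_RESOURCE_UNIT_ENTRY);
  devlink_resource_register(dl, "irqs", 128, RES_IRQS,
                            DEVLINK_RESOURCE_ID_PARENT_TOP, &params);

  /* One child per aux device so each consumer gets its own knob */
  devlink_resource_register(dl, "irqs_netdev", 64, RES_IRQS_NETDEV,
                            RES_IRQS, &params);
  devlink_resource_register(dl, "irqs_rdma", 32, RES_IRQS_RDMA,
                            RES_IRQS, &params);
  devlink_resource_register(dl, "irqs_vdpa", 8, RES_IRQS_VDPA,
                            RES_IRQS, &params);

Then the existing 'devlink resource set' flow could cover the third
item instead of inventing a new uAPI, though I haven't thought hard
about whether it fits.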
The tricky bit is the auto-probing/configuration. By the time the user
has a chance to apply a limit the drivers are already bound and have
already done their setup, so each subsystem has to support dynamically
imposing a limit.
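Which probably means some kind of op the aux bus owner can call into an
already-bound aux driver; purely hypothetical, nothing like this exists
today:

  /* Hypothetical op table, for illustration only: something the aux
   * bus owner could call on an already-bound aux driver when the user
   * raises or lowers its interrupt budget.
   */
  struct mlx5_adev_resource_ops {
          u64 (*irqs_in_use)(struct auxiliary_device *adev);
          int (*set_irq_limit)(struct auxiliary_device *adev, u16 limit);
  };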
And I simplified things a bit above too: we actually have two kinds of
interrupt demand, shareable and dedicated. The actual need is to carve
out a bunch of dedicated interrupts and only allow subsystems that are
doing VT-d guest interrupt assignment (eg VDPA) to consume them.
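So whatever accounting we end up with probably needs two pools rather
than one flat count; again just a hypothetical shape, not an existing
structure:

  /* Hypothetical: only subsystems doing VT-d guest interrupt
   * assignment (eg VDPA) may draw from the dedicated pool, everything
   * else contends for the shared one.
   */
  struct mlx5_irq_budget {
          u16 shared_max;         /* backed by shareable vectors */
          u16 dedicated_max;      /* reserved for guest assignment */
          u16 dedicated_in_use;
  };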
Jason
Thread overview: 57+ messages
2020-11-12 19:24 [PATCH net-next 00/13] Add mlx5 subfunction support Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 01/13] devlink: Prepare code to fill multiple port function attributes Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 02/13] devlink: Introduce PCI SF port flavour and port attribute Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 03/13] devlink: Support add and delete devlink port Parav Pandit
2020-11-18 16:21 ` David Ahern
2020-11-18 17:02 ` Parav Pandit
2020-11-18 18:03 ` David Ahern
2020-11-18 18:38 ` Jason Gunthorpe
2020-11-18 19:36 ` David Ahern
2020-11-18 20:42 ` Jason Gunthorpe [this message]
2020-11-18 19:22 ` Parav Pandit
2020-11-19 0:41 ` Jacob Keller
2020-11-19 1:17 ` David Ahern
2020-11-19 1:56 ` Samudrala, Sridhar
2020-11-19 0:52 ` Jacob Keller
2020-11-12 19:24 ` [PATCH net-next 04/13] devlink: Support get and set state of port function Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 05/13] devlink: Avoid global devlink mutex, use per instance reload lock Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 06/13] devlink: Introduce devlink refcount to reduce scope of global devlink_mutex Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 07/13] net/mlx5: SF, Add auxiliary device support Parav Pandit
2020-12-07 2:48 ` David Ahern
2020-12-07 4:53 ` Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 08/13] net/mlx5: SF, Add auxiliary device driver Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 09/13] net/mlx5: E-switch, Prepare eswitch to handle SF vport Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 10/13] net/mlx5: E-switch, Add eswitch helpers for " Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 11/13] net/mlx5: SF, Add SF configuration hardware commands Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 12/13] net/mlx5: SF, Add port add delete functionality Parav Pandit
2020-11-12 19:24 ` [PATCH net-next 13/13] net/mlx5: SF, Port function state change support Parav Pandit
2020-11-16 22:52 ` [PATCH net-next 00/13] Add mlx5 subfunction support Jakub Kicinski
2020-11-17 0:06 ` Saeed Mahameed
2020-11-17 1:58 ` Jakub Kicinski
2020-11-17 4:08 ` Parav Pandit
2020-11-17 17:11 ` Jakub Kicinski
2020-11-17 18:49 ` Jason Gunthorpe
2020-11-19 2:14 ` Jakub Kicinski
2020-11-19 4:35 ` David Ahern
2020-11-19 5:57 ` Saeed Mahameed
2020-11-20 1:31 ` Jakub Kicinski
2020-11-25 5:33 ` David Ahern
2020-11-25 6:00 ` Parav Pandit
2020-11-25 14:37 ` David Ahern
2020-11-20 1:29 ` Jakub Kicinski
2020-11-20 17:58 ` Alexander Duyck
2020-11-20 19:04 ` Samudrala, Sridhar
2020-11-23 21:51 ` Saeed Mahameed
2020-11-24 7:01 ` Jason Wang
2020-11-24 7:05 ` Jason Wang
2020-11-19 6:12 ` Saeed Mahameed
2020-11-19 8:25 ` Parav Pandit
2020-11-20 1:35 ` Jakub Kicinski
2020-11-20 3:34 ` Parav Pandit
2020-11-17 18:50 ` Parav Pandit
2020-11-19 2:23 ` Jakub Kicinski
2020-11-19 6:22 ` Saeed Mahameed
2020-11-19 14:00 ` Jason Gunthorpe
2020-11-20 3:35 ` Jakub Kicinski
2020-11-20 3:50 ` Parav Pandit
2020-11-20 16:16 ` Jason Gunthorpe