From: "Steve Wise" <swise@opengridcomputing.com>
To: "'Sagi Grimberg'" <sagi@grimberg.me>, "'Christoph Hellwig'" <hch@lst.de>
Cc: <linux-block@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
	<linux-nvme@lists.infradead.org>,
	"'Max Gurtovoy'" <maxg@mellanox.com>
Subject: RE: [PATCH v2] block: fix rdma queue mapping
Date: Tue, 23 Oct 2018 16:31:04 -0500	[thread overview]
Message-ID: <030301d46b17$b43132d0$1c939870$@opengridcomputing.com> (raw)
In-Reply-To: <ab0370aa-c488-bfed-675a-b0f84d75a42a@grimberg.me>



> -----Original Message-----
> From: Sagi Grimberg <sagi@grimberg.me>
> Sent: Tuesday, October 23, 2018 4:25 PM
> To: Steve Wise <swise@opengridcomputing.com>; 'Christoph Hellwig'
> <hch@lst.de>
> Cc: linux-block@vger.kernel.org; linux-rdma@vger.kernel.org; linux-
> nvme@lists.infradead.org; 'Max Gurtovoy' <maxg@mellanox.com>
> Subject: Re: [PATCH v2] block: fix rdma queue mapping
> 
> 
> >>>> Christoph, Sagi: it seems you think /proc/irq/$IRQ/smp_affinity
> >>>> shouldn't be allowed if drivers support managed affinity. Is that
> >>>> correct?
> >>>
> >>> Not just shouldn't, but simply can't.
> >>>
> >>>> But as it stands, things are just plain borked if an rdma driver
> >>>> supports ib_get_vector_affinity() yet the admin changes the affinity via
> >>>> /proc...
> >>>
> >>> I think we need to fix ib_get_vector_affinity to not return anything
> >>> if the device doesn't use managed irq affinity.
> >>
> >> Steve, does iw_cxgb4 use managed affinity?
> >>
> >> I'll send a patch for mlx5 to simply not return anything as managed
> >> affinity is not something that the maintainers want to do.
> >
> > I'm beginning to think I don't know what "managed affinity" actually is.
> > Currently iw_cxgb4 doesn't support ib_get_vector_affinity().  I have a
> > patch for it, but ran into this whole issue with nvme failing if someone
> > changes the affinity map via /proc.
> 
> That means that the PCI subsystem sets your vectors' affinity correctly
> and keeps it immutable. It also guarantees that you get reserved vectors
> rather than a best-effort assignment when CPU cores are offlined.
> 
> You can enable it simply by adding PCI_IRQ_AFFINITY to
> pci_alloc_irq_vectors(), or by calling pci_alloc_irq_vectors_affinity()
> to communicate pre/post vectors that don't participate in
> affinitization (nvme uses this for its admin queue).
> 
> This way you can easily plug in ->get_vector_affinity() to return
> pci_irq_get_affinity(dev, vector).
> 
> The original patch set from hch:
> https://lwn.net/Articles/693653/

Thanks for educating me. 😊  I'll have a look into this. 
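
To make sure I follow, here is a rough, untested sketch of how I read the
suggestion (the c4iw_get_vector_affinity() name, the to_c4dev() accessor,
and nr_io_queues below are placeholders, not actual iw_cxgb4 code):
reserve one pre-vector outside the affinity spread, the way nvme does for
its admin queue, and then forward ->get_vector_affinity() to the PCI core:

/* Allocate MSI-X vectors with managed affinity; the single
 * pre-vector stays out of the spread.
 */
struct irq_affinity affd = { .pre_vectors = 1 };
int nvecs;

nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, 1 + nr_io_queues,
				       PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
				       &affd);

/* ->get_vector_affinity() then just forwards to the PCI core,
 * offsetting by one to skip the reserved pre-vector.
 */
static const struct cpumask *
c4iw_get_vector_affinity(struct ib_device *ibdev, int comp_vector)
{
	struct pci_dev *pdev = to_c4dev(ibdev)->pdev; /* placeholder */

	return pci_irq_get_affinity(pdev, comp_vector + 1);
}

If I'm reading it right, the only subtle bit is the +1 offset once
pre_vectors is nonzero, so that completion vectors map onto the spread
vectors rather than the reserved one.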

Steve.

Thread overview:

2018-08-20 20:54 [PATCH v2] block: fix rdma queue mapping Sagi Grimberg
2018-08-21  2:04 ` Ming Lei
2018-08-25  2:06   ` Sagi Grimberg
2018-08-25 12:18     ` Steve Wise
2018-08-27  3:50       ` Ming Lei
2018-08-22 13:11 ` Christoph Hellwig
2018-08-25  2:17   ` Sagi Grimberg
2018-10-03 19:05     ` Steve Wise
2018-10-03 21:14       ` Sagi Grimberg
2018-10-03 21:21         ` Steve Wise
2018-10-16  1:04         ` Sagi Grimberg
2018-10-17 16:37           ` Christoph Hellwig
2018-10-17 16:37       ` Christoph Hellwig
2018-10-23  6:02         ` Sagi Grimberg
2018-10-23 13:00           ` Steve Wise
2018-10-23 21:25             ` Sagi Grimberg
2018-10-23 21:31               ` Steve Wise [this message]
2018-10-24  0:09               ` Shiraz Saleem
2018-10-24  0:37                 ` Sagi Grimberg
2018-10-29 23:58                   ` Saleem, Shiraz
2018-10-30 18:26                     ` Sagi Grimberg
