From: Sagi Grimberg <sagi@grimberg.me>
To: Max Gurtovoy <maxg@mellanox.com>,
	linux-rdma@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org
Cc: netdev@vger.kernel.org, Saeed Mahameed <saeedm@mellanox.com>,
	Or Gerlitz <ogerlitz@mellanox.com>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma
Date: Thu, 6 Apr 2017 11:34:03 +0300	[thread overview]
Message-ID: <09b4dcb4-d0ab-43b7-5f1c-394ecfcce2f0@grimberg.me> (raw)
In-Reply-To: <86ed1762-a990-691f-e043-3d7dcac8fe85@mellanox.com>

> Hi Sagi,

Hey Max,

> the patchset looks good and of course we can add support for more
> drivers in the future.
> have you run some performance testing with the nvmf initiator ?

I'm limited by the target machine in terms of IOPs, but the host shows
a ~10% decrease in CPU usage, and latency improves slightly as well.
The improvement is more apparent depending on which CPU I run my IO
thread on: due to the mismatch between comp_vectors and queue
mappings, some queues have their IRQ vectors mapped to a core on a
different NUMA node.

Thread overview: 52+ messages
2017-04-02 13:41 [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma Sagi Grimberg
2017-04-02 13:41 ` [PATCH rfc 1/6] mlx5: convert to generic pci_alloc_irq_vectors Sagi Grimberg
2017-04-04  6:27   ` Christoph Hellwig
2017-04-02 13:41 ` [PATCH rfc 2/6] mlx5: move affinity hints assignments to generic code Sagi Grimberg
2017-04-04  6:32   ` Christoph Hellwig
2017-04-06  8:29     ` Sagi Grimberg
2017-04-02 13:41 ` [PATCH rfc 3/6] RDMA/core: expose affinity mappings per completion vector Sagi Grimberg
2017-04-04  6:32   ` Christoph Hellwig
2017-04-02 13:41 ` [PATCH rfc 4/6] mlx5: support ->get_vector_affinity Sagi Grimberg
2017-04-04  6:33   ` Christoph Hellwig
2017-04-02 13:41 ` [PATCH rfc 5/6] block: Add rdma affinity based queue mapping helper Sagi Grimberg
2017-04-04  6:33   ` Christoph Hellwig
2017-04-04  7:46   ` Max Gurtovoy
2017-04-04 13:09     ` Christoph Hellwig
2017-04-06  9:23     ` Sagi Grimberg
2017-04-05 14:17   ` Jens Axboe
2017-04-02 13:41 ` [PATCH rfc 6/6] nvme-rdma: use intelligent affinity based queue mappings Sagi Grimberg
2017-04-04  6:34   ` Christoph Hellwig
2017-04-06  8:30     ` Sagi Grimberg
2017-04-04  7:51 ` [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma Max Gurtovoy
2017-04-06  8:34   ` Sagi Grimberg [this message]
2017-04-10 18:05 ` Steve Wise
2017-04-12  6:34   ` Christoph Hellwig
