From: Sinan Kaya <okaya@codeaurora.org>
To: Keith Busch <keith.busch@intel.com>
Cc: linux-nvme@lists.infradead.org, timur@codeaurora.org,
	linux-arm-msm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Jens Axboe <axboe@fb.com>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme: Acknowledge completion queue on each iteration
Date: Tue, 18 Jul 2017 14:52:26 -0400	[thread overview]
Message-ID: <dada875c-ed69-0126-8e83-78a564e6b4bd@codeaurora.org> (raw)
In-Reply-To: <20170718143617.GA7613@localhost.localdomain>

On 7/18/2017 10:36 AM, Keith Busch wrote:
> On Mon, Jul 17, 2017 at 07:07:00PM -0400, okaya@codeaurora.org wrote:
>> Maybe, I need to understand the design better. I was curious why completion
>> and submission queues were protected by a single lock causing lock
>> contention.
> Ideally the queues are tied to CPUs, so you couldn't have one thread
> submitting to a particular queue-pair while another thread is reaping
> completions from it. Such a setup wouldn't get lock contention.

I do see that the NVMe driver sets up a completion interrupt per CPU
core. No problem with that.

However, I don't think you can guarantee that a single CPU core will
always be the only one targeting a given submission queue, especially
with asynchronous IO.

Lock contention counters from CONFIG_LOCK_STAT are pointing to nvmeq->lock
in my FIO tests.

Did I miss something?
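
To make the contention pattern concrete, here is a minimal, purely
illustrative sketch of what I mean (not the driver's actual code; the
struct and function names below are invented): a single per-queue lock
covers both the submission path and the interrupt-driven completion
path, so submitters and the IRQ handler for the same queue pair end up
serializing on it.

#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical names, for illustration only. */
struct nvmeq_sketch {
	spinlock_t lock;	/* one lock shared by SQ and CQ */
	u16 sq_tail;
	u16 cq_head;
};

static void sketch_submit(struct nvmeq_sketch *q)
{
	unsigned long flags;

	/* Submitters must disable irqs: the IRQ handler takes the same lock. */
	spin_lock_irqsave(&q->lock, flags);
	/* copy the command into the SQ slot, advance sq_tail, ring doorbell */
	q->sq_tail++;
	spin_unlock_irqrestore(&q->lock, flags);
}

static irqreturn_t sketch_irq(int irq, void *data)
{
	struct nvmeq_sketch *q = data;

	/* Runs in hardirq context, so a plain spin_lock suffices here. */
	spin_lock(&q->lock);
	/* reap CQ entries, advance cq_head, ring the CQ doorbell */
	q->cq_head++;
	spin_unlock(&q->lock);
	return IRQ_HANDLED;
}

With several CPUs submitting to the same queue pair while its interrupt
handler reaps completions, the wait time accumulates against that one
lock in /proc/lock_stat.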

> 
> Some machines have so many CPUs, though, that sharing hardware queues
> is required. We've experimented with separate submission and completion
> locks for such cases, but I've never seen an improved performance as a
> result.
> 

I have also experimented with multiple locks and saw no significant gains.
However, I was curious whether somebody else had a better implementation than mine.
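
A split-lock experiment along these lines is what I have in mind
(again an illustrative sketch only; the names are invented, and the
in-tree driver keeps a single per-queue lock today):

/* Same includes as the sketch above. */
struct nvmeq_split_sketch {
	spinlock_t sq_lock;	/* serializes submitters */
	spinlock_t cq_lock;	/* serializes completion reaping */
	u16 sq_tail;
	u16 cq_head;
};

static void split_submit(struct nvmeq_split_sketch *q)
{
	unsigned long flags;

	spin_lock_irqsave(&q->sq_lock, flags);
	/* copy the command, advance sq_tail, ring the SQ doorbell */
	q->sq_tail++;
	spin_unlock_irqrestore(&q->sq_lock, flags);
}

static irqreturn_t split_irq(int irq, void *data)
{
	struct nvmeq_split_sketch *q = data;

	spin_lock(&q->cq_lock);
	/* reap CQ entries, advance cq_head, ring the CQ doorbell */
	q->cq_head++;
	spin_unlock(&q->cq_lock);
	return IRQ_HANDLED;
}

Submitters sharing a queue still serialize on sq_lock, and the doorbell
writes still hit the same device registers, which may be why the split
alone doesn't show a measurable gain for me either.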

-- 
Sinan Kaya
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.

Thread overview: 10 messages

2017-07-17 22:36 [PATCH] nvme: Acknowledge completion queue on each iteration Sinan Kaya
2017-07-17 22:45 ` Keith Busch
2017-07-17 22:46   ` Sinan Kaya
2017-07-17 22:56     ` Keith Busch
2017-07-17 23:07       ` okaya
2017-07-18 14:36         ` Keith Busch
2017-07-18 18:52           ` Sinan Kaya [this message]
2017-07-18 21:26             ` Keith Busch
2017-07-19  9:20 ` Sagi Grimberg
2017-07-19 10:37   ` Sinan Kaya
