From: Sinan Kaya <okaya@codeaurora.org>
To: Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, timur@codeaurora.org
Cc: linux-arm-msm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	Keith Busch <keith.busch@intel.com>, Jens Axboe <axboe@fb.com>,
	Christoph Hellwig <hch@lst.de>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme: Acknowledge completion queue on each iteration
Date: Wed, 19 Jul 2017 06:37:22 -0400	[thread overview]
Message-ID: <933e9d49-ecfd-cc83-c116-29f97211480c@codeaurora.org> (raw)
In-Reply-To: <5595ca25-f616-c0f8-fb2c-241a951e8848@grimberg.me>

On 7/19/2017 5:20 AM, Sagi Grimberg wrote:
>> The code currently rings the completion queue doorbell only after processing
>> all completed events and sending callbacks to the block layer.
>>
>> This causes a performance drop when many jobs are queued towards the HW.
>> Ring the completion queue doorbell on each loop iteration instead, allowing
>> the HW to queue new jobs sooner.
>>
>> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
>> ---
>>   drivers/nvme/host/pci.c | 5 ++---
>>   1 file changed, 2 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
>> index d10d2f2..33d9b5b 100644
>> --- a/drivers/nvme/host/pci.c
>> +++ b/drivers/nvme/host/pci.c
>> @@ -810,13 +810,12 @@ static void nvme_process_cq(struct nvme_queue *nvmeq)
>>         while (nvme_read_cqe(nvmeq, &cqe)) {
>>           nvme_handle_cqe(nvmeq, &cqe);
>> +        nvme_ring_cq_doorbell(nvmeq);
>>           consumed++;
>>       }
>>   -    if (consumed) {
>> -        nvme_ring_cq_doorbell(nvmeq);
>> +    if (consumed)
>>           nvmeq->cqe_seen = 1;
>> -    }
>>   }
> 
> Agree with Keith that this is definitely not the way to go; it
> adds MMIO operations in the hot path for very little gain (if
> any).
> 

Understood. Different architectures may have different latencies when accessing
HW registers. As you indicated, it might be expensive on some platforms, and
this change would make it worse.

I'm doing a self-NACK as well.

-- 
Sinan Kaya
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.

  reply	other threads:[~2017-07-19 10:37 UTC|newest]

Thread overview:
2017-07-17 22:36 [PATCH] nvme: Acknowledge completion queue on each iteration Sinan Kaya
2017-07-17 22:45 ` Keith Busch
2017-07-17 22:46   ` Sinan Kaya
2017-07-17 22:56     ` Keith Busch
2017-07-17 23:07       ` okaya
2017-07-18 14:36         ` Keith Busch
2017-07-18 18:52           ` Sinan Kaya
2017-07-18 21:26             ` Keith Busch
2017-07-19  9:20 ` Sagi Grimberg
2017-07-19 10:37   ` Sinan Kaya [this message]
