* [PATCH] nvmet: avoid unnecessary fsync and flush bio
From: Guixin Liu @ 2022-07-22 12:12 UTC
  To: hch, sagi, kch; +Cc: linux-nvme

For a file backend that does not use buffered_io and a block device
backend without a volatile write cache, the fsync and the flush bio
are both unnecessary, so avoid issuing them.

Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
---
 drivers/nvme/target/io-cmd-bdev.c | 11 +++++++++++
 drivers/nvme/target/io-cmd-file.c |  5 ++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 2dc1c10..64f3eb0 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -333,10 +333,16 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 static void nvmet_bdev_execute_flush(struct nvmet_req *req)
 {
 	struct bio *bio = &req->b.inline_bio;
+	struct request_queue *q = req->ns->bdev->bd_queue;
 
 	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
+	if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
+		nvmet_req_complete(req, NVME_SC_SUCCESS);
+		return;
+	}
+
 	bio_init(bio, req->ns->bdev, req->inline_bvec,
 		 ARRAY_SIZE(req->inline_bvec), REQ_OP_WRITE | REQ_PREFLUSH);
 	bio->bi_private = req;
@@ -347,6 +353,11 @@ static void nvmet_bdev_execute_flush(struct nvmet_req *req)
 
 u16 nvmet_bdev_flush(struct nvmet_req *req)
 {
+	struct request_queue *q = req->ns->bdev->bd_queue;
+
+	if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags))
+		return 0;
+
 	if (blkdev_issue_flush(req->ns->bdev))
 		return NVME_SC_INTERNAL | NVME_SC_DNR;
 	return 0;
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index 64b47e2..801fb8a 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -268,7 +268,10 @@ static void nvmet_file_execute_rw(struct nvmet_req *req)
 
 u16 nvmet_file_flush(struct nvmet_req *req)
 {
-	return errno_to_nvme_status(req, vfs_fsync(req->ns->file, 1));
+	if (req->ns->buffered_io)
+		return errno_to_nvme_status(req, vfs_fsync(req->ns->file, 1));
+	else
+		return errno_to_nvme_status(req, 0);
 }
 
 static void nvmet_file_flush_work(struct work_struct *w)
-- 
1.8.3.1




* Re: [PATCH] nvmet: avoid unnecessary fsync and flush bio
From: Christoph Hellwig @ 2022-07-22 15:06 UTC
  To: Guixin Liu; +Cc: hch, sagi, kch, linux-nvme

On Fri, Jul 22, 2022 at 08:12:24PM +0800, Guixin Liu wrote:
> @@ -333,10 +333,16 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
>  static void nvmet_bdev_execute_flush(struct nvmet_req *req)
>  {
>  	struct bio *bio = &req->b.inline_bio;
> +	struct request_queue *q = req->ns->bdev->bd_queue;
>  
>  	if (!nvmet_check_transfer_len(req, 0))
>  		return;
>  
> +	if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {

This should be using bdev_write_cache().
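
A minimal sketch of that suggestion, using the bdev_write_cache()
helper from <linux/blkdev.h> (illustrative only, not necessarily what
a v2 would do):

	/* No volatile write cache: complete the flush as a no-op. */
	if (!bdev_write_cache(req->ns->bdev)) {
		nvmet_req_complete(req, NVME_SC_SUCCESS);
		return;
	}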

But more importantly: if the backend does not have a volatile write
cache, we should not even receive flushes from the remote side.

> +	if (req->ns->buffered_io)
> +		return errno_to_nvme_status(req, vfs_fsync(req->ns->file, 1));
> +	else
> +		return errno_to_nvme_status(req, 0);

No need for the else, and the simple success case can just use the
constant status and does not need a translation.
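
One possible shape for the file flush path with that applied (a sketch
under those assumptions, not necessarily the final v2 code):

u16 nvmet_file_flush(struct nvmet_req *req)
{
	/* I/O that bypasses the page cache has nothing to fsync. */
	if (!req->ns->buffered_io)
		return NVME_SC_SUCCESS;

	return errno_to_nvme_status(req, vfs_fsync(req->ns->file, 1));
}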



* Re: [PATCH] nvmet: avoid unnecessary fsync and flush bio
From: Guixin Liu @ 2022-07-25  2:18 UTC
  To: Christoph Hellwig; +Cc: sagi, kch, linux-nvme


On 2022/7/22 23:06, Christoph Hellwig wrote:
> On Fri, Jul 22, 2022 at 08:12:24PM +0800, Guixin Liu wrote:
>> @@ -333,10 +333,16 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
>>   static void nvmet_bdev_execute_flush(struct nvmet_req *req)
>>   {
>>   	struct bio *bio = &req->b.inline_bio;
>> +	struct request_queue *q = req->ns->bdev->bd_queue;
>>   
>>   	if (!nvmet_check_transfer_len(req, 0))
>>   		return;
>>   
>> +	if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
> This should be using bdev_write_cache().
I will change that in the v2, thanks.
>
> But more importantly: if the backend does not have a volatile write
> cache, we should not even receive flushes from the remote side.

The vwc is in the controller data structure, not in each namespace.
Currently, the vwc reported by the nvmet controller is always 1,
because some namespaces may have a volatile write cache present while
others do not, so we cannot set vwc = 0 to avoid receiving flushes.
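
For context, the target advertises the write cache as a single
per-controller bit in the Identify Controller data, roughly along
these lines (field per the NVMe spec; the exact nvmet code may
differ):

	/* VWC is one bit for the whole controller, so it cannot
	 * express a mix of cached and uncached namespaces. */
	id->vwc = NVME_CTRL_VWC_PRESENT;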

>
>> +	if (req->ns->buffered_io)
>> +		return errno_to_nvme_status(req, vfs_fsync(req->ns->file, 1));
>> +	else
>> +		return errno_to_nvme_status(req, 0);
> No need for the else, and the simple success case can just use the
> constant status and does not need a translation.

I will fix that in v2, thanks.

Best regards,

Guixin Liu



