From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Ross Zwisler, "H. Peter Anvin", "J. Bruce Fields", "Theodore Ts'o",
	Alexander Viro, Andreas Dilger, Dan Williams, Dave Chinner,
	Ingo Molnar, Jan Kara, Jeff Layton, Matthew Wilcox, Thomas Gleixner,
	linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-nvdimm@ml01.01.org, x86@kernel.org,
	xfs@oss.sgi.com, Andrew Morton, Matthew Wilcox
Subject: [RFC 03/11] pmem: enable REQ_FLUSH handling
Date: Thu, 29 Oct 2015 14:12:07 -0600
Message-Id: <1446149535-16200-4-git-send-email-ross.zwisler@linux.intel.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1446149535-16200-1-git-send-email-ross.zwisler@linux.intel.com>
References: <1446149535-16200-1-git-send-email-ross.zwisler@linux.intel.com>

Currently the PMEM driver doesn't accept REQ_FLUSH bios.  These are
sent down via blkdev_issue_flush() in response to an fsync() or msync().

When we get an msync() or fsync() it is the responsibility of the DAX
code to flush all dirty pages to media.  The PMEM driver then just has
to issue a wmb_pmem() in response to the REQ_FLUSH to ensure that,
before we return, all of the flushed data has been durably stored on
the media.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 drivers/nvdimm/pmem.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 0ba6a97..e1e222e 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -80,7 +80,7 @@ static void pmem_make_request(struct request_queue *q, struct bio *bio)
 	if (do_acct)
 		nd_iostat_end(bio, start);
 
-	if (bio_data_dir(bio))
+	if (bio_data_dir(bio) || (bio->bi_rw & REQ_FLUSH))
 		wmb_pmem();
 
 	bio_endio(bio);
@@ -189,6 +189,7 @@ static int pmem_attach_disk(struct device *dev,
 	blk_queue_physical_block_size(pmem->pmem_queue, PAGE_SIZE);
 	blk_queue_max_hw_sectors(pmem->pmem_queue, UINT_MAX);
 	blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
+	blk_queue_flush(pmem->pmem_queue, REQ_FLUSH);
 	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, pmem->pmem_queue);
 
 	disk = alloc_disk(0);
-- 
2.1.0
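
A quick userspace illustration of the path described above (not part of
the patch itself): a store through a DAX mapping followed by msync() is
what ultimately makes the filesystem call blkdev_issue_flush(), which
submits the REQ_FLUSH bio that this patch teaches pmem_make_request()
to accept.  This is a minimal sketch; the mount point /mnt/pmem and the
file name are hypothetical, and it assumes a DAX-capable filesystem
mounted on a pmem device.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical path on a DAX-mounted pmem filesystem. */
	const char *path = "/mnt/pmem/testfile";
	int fd = open(path, O_CREAT | O_RDWR, 0644);

	if (fd < 0 || ftruncate(fd, 4096) < 0) {
		perror("open/ftruncate");
		return EXIT_FAILURE;
	}

	char *addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* With DAX this store goes directly to persistent memory. */
	strcpy(addr, "hello, pmem");

	/*
	 * DAX flushes the dirty range, then the filesystem calls
	 * blkdev_issue_flush(); the resulting REQ_FLUSH bio causes
	 * the pmem driver to issue wmb_pmem() before returning.
	 */
	if (msync(addr, 4096, MS_SYNC) < 0) {
		perror("msync");
		return EXIT_FAILURE;
	}

	munmap(addr, 4096);
	close(fd);
	return EXIT_SUCCESS;
}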