From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 10 Mar 2018 10:46:33 +0100
From: Christoph Hellwig
To: Dan Williams
Cc: Jan Kara, Matthew Wilcox, linux-nvdimm@lists.01.org, Dave Chinner,
	linux-kernel@vger.kernel.org, linux-xfs@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: Re: [PATCH v5 02/11] xfs, dax: introduce xfs_dax_aops
Message-ID: <20180310094633.GA31604@lst.de>
References: <152066488891.40260.14605734226832760468.stgit@dwillia2-desk3.amr.corp.intel.com>
 <152066489984.40260.2215636951958334858.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <152066489984.40260.2215636951958334858.stgit@dwillia2-desk3.amr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
Sender: "Linux-nvdimm" <linux-nvdimm-bounces@lists.01.org>
List-ID: <linux-nvdimm.lists.01.org>

> +int dax_set_page_dirty(struct page *page)
> +{
> +	/*
> +	 * Unlike __set_page_dirty_no_writeback that handles dirty page
> +	 * tracking in the page object, dax does all dirty tracking in
> +	 * the inode address_space in response to mkwrite faults.  In the
> +	 * dax case we only need to worry about potentially dirty CPU
> +	 * caches, not dirty page cache pages to write back.
> +	 *
> +	 * This callback is defined to prevent fallback to
> +	 * __set_page_dirty_buffers() in set_page_dirty().
> +	 */
> +	return 0;
> +}

Make this a generic noop_set_page_dirty maybe?
> +EXPORT_SYMBOL(dax_set_page_dirty);
> +
> +void dax_invalidatepage(struct page *page, unsigned int offset,
> +		unsigned int length)
> +{
> +	/*
> +	 * There is no page cache to invalidate in the dax case, however
> +	 * we need this callback defined to prevent falling back to
> +	 * block_invalidatepage() in do_invalidatepage().
> +	 */
> +}

Same here.

> +EXPORT_SYMBOL(dax_invalidatepage);

And EXPORT_SYMBOL_GPL for anything dax-related, please.

> +const struct address_space_operations xfs_dax_aops = {
> +	.writepages		= xfs_vm_writepages,

Please split out the DAX case from xfs_vm_writepages.

This patch should probably also be split into VFS and XFS parts.
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm