From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 2 Feb 2017 02:56:51 -0800
From: Christoph Hellwig
To: Al Viro
Cc: Jeff Layton, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	lustre-devel@lists.lustre.org, v9fs-developer@lists.sourceforge.net,
	Linus Torvalds, Jan Kara, Chris Wilson, "Kirill A. Shutemov"
Subject: Re: [PATCH v3 0/2] iov_iter: allow iov_iter_get_pages_alloc to allocate more pages per call
Message-ID: <20170202105651.GA32111@infradead.org>
References: <20170124212327.14517-1-jlayton@redhat.com>
	<20170125133205.21704-1-jlayton@redhat.com>
	<20170202095125.GF27291@ZenIV.linux.org.uk>
In-Reply-To: <20170202095125.GF27291@ZenIV.linux.org.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.7.1 (2016-10-04)
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Feb 02, 2017 at 09:51:25AM +0000, Al Viro wrote:
> On Wed, Jan 25, 2017 at 08:32:03AM -0500, Jeff Layton wrote:
> > Small respin of the patch that I sent yesterday for the same thing.
> >
> > This moves the maxsize handling into iov_iter_pvec_size, so that we don't
> > end up iterating past the max size we'll use anyway when trying to
> > determine the pagevec length.
> >
> > Also, a respun patch to make ceph use iov_iter_get_pages_alloc instead of
> > trying to do it via its own routine.
> >
> > Al, if these look ok, do you want to pick these up or shall I ask
> > Ilya to merge them via the ceph tree?
>
> I'd rather have that kind of work go through the vfs tree; said that,
> I really wonder if this is the right approach. Most of the users of
> iov_iter_get_pages()/iov_iter_get_pages_alloc() look like they want
> something like
> 	iov_iter_for_each_page(iter, size, f, data)
> with int (*f)(struct page *page, size_t from, size_t size, void *data)
> passed as callback. Not everything fits that model, but there's a whole
> lot of things that do.

I was planning to do that, mostly because of the iomap dio code that
would not only get a lot cleaner with this, but also support multi-page
bvecs that we hope to have in the block layer soon.

The issue with it is that we need to touch all the arch
get_user_pages_fast implementations, so it's going to be a relatively
invasive change that I didn't want to fix with just introducing the new
direct I/O code.
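
A rough sketch of the helper Al describes, layered on top of the existing
iov_iter_get_pages_alloc() purely for illustration: the iov_iter_for_each_page
name and the callback signature come from the mail above, while the
reference-counting, error-handling, and typedef choices below are assumptions
rather than part of any posted patch.

/*
 * Illustrative sketch only: one possible shape for the proposed
 * iov_iter_for_each_page() helper, built on the existing
 * iov_iter_get_pages_alloc().  The ownership rule assumed here is that
 * the helper itself drops the page references it took and frees the
 * temporary page array; a real implementation would likely avoid the
 * temporary allocation entirely.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/uio.h>

typedef int (*iov_page_fn)(struct page *page, size_t from, size_t size,
			   void *data);

static int iov_iter_for_each_page(struct iov_iter *iter, size_t maxsize,
				  iov_page_fn f, void *data)
{
	while (maxsize && iov_iter_count(iter)) {
		struct page **pages;
		size_t start, left;
		ssize_t bytes;
		int i, npages, ret = 0;

		/* Pin as many pages as we can get in one go. */
		bytes = iov_iter_get_pages_alloc(iter, &pages, maxsize, &start);
		if (bytes <= 0)
			return bytes;

		npages = DIV_ROUND_UP(start + bytes, PAGE_SIZE);
		left = bytes;
		for (i = 0; i < npages; i++) {
			size_t seg = min_t(size_t, PAGE_SIZE - start, left);

			/* Hand each page segment to the caller's callback. */
			if (!ret)
				ret = f(pages[i], start, seg, data);
			put_page(pages[i]);
			left -= seg;
			start = 0;
		}
		kvfree(pages);
		if (ret)
			return ret;

		iov_iter_advance(iter, bytes);
		maxsize -= bytes;
	}
	return 0;
}

A production version would presumably walk the iterator directly instead of
bouncing through a temporary page array, which is where the per-architecture
get_user_pages_fast changes Christoph mentions would come in.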