From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 2 Aug 2019 07:24:43 -0700
From: Matthew Wilcox <willy@infradead.org>
To: Jan Kara
Cc: Michal Hocko, john.hubbard@gmail.com, Andrew Morton, Christoph Hellwig,
	Dan Williams, Dave Chinner, Dave Hansen, Ira Weiny, Jason Gunthorpe,
	Jérôme Glisse, John Hubbard, LKML, amd-gfx@lists.freedesktop.org,
	ceph-devel@vger.kernel.org, devel@driverdev.osuosl.org,
	devel@lists.orangefs.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-block@vger.kernel.org,
	linux-crypto@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-media@vger.kernel.org,
	linux-mm@kvack.org, linux-nfs@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-rpi-kernel@lists.infradead.org,
	linux-xfs@vger.kernel.org, netdev@vger.kernel.org,
	rds-devel@oss.oracle.com, sparclinux@vger.kernel.org, x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 00/34] put_user_pages(): miscellaneous call sites
Message-ID: <20190802142443.GB5597@bombadil.infradead.org>
References: <20190802022005.5117-1-jhubbard@nvidia.com>
	<20190802091244.GD6461@dhcp22.suse.cz>
	<20190802124146.GL25064@quack2.suse.cz>
In-Reply-To: <20190802124146.GL25064@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.11.4 (2019-03-13)

On Fri, Aug 02, 2019 at 02:41:46PM +0200, Jan Kara wrote:
> On Fri 02-08-19 11:12:44, Michal Hocko wrote:
> > On Thu 01-08-19 19:19:31, john.hubbard@gmail.com wrote:
> > [...]
> > > 2) Convert all of the call sites for get_user_pages*(), to
> > > invoke put_user_page*(), instead of put_page(). This involves dozens of
> > > call sites, and will take some time.
> >
> > How do we make sure this is the case and it will remain the case in the
> > future? There must be some automagic to enforce/check that. It is simply
> > not manageable to do it every now and then, because then 3) will simply
> > never be safe.
> >
> > Have you considered Coccinelle or some other scripted way to do the
> > transition? I have no idea how to deal with future changes that would
> > break the balance, though.
>
> Yeah, that's why I've been suggesting at LSF/MM that we may need to create
> a gup wrapper - say vaddr_pin_pages() - and track which sites dropping
> references got converted by using this wrapper instead of gup. The
> counterpart would then be more logically named as unpin_page() or whatever
> instead of put_user_page(). Sure this is not completely foolproof (you can
> create a new callsite using vaddr_pin_pages() and then just drop refs using
> put_page()) but I suppose it would be a high enough barrier for missed
> conversions... Thoughts?

I think the API we really need is get_user_bvec() / put_user_bvec(),
and I know Christoph has been putting some work into that. That avoids
doing refcount operations on hundreds of pages if the page in question
is a huge page. Once people are switched over to that, they won't be
tempted to manually call put_page() on the individual constituent pages
of a bvec.
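
To make the wrapper idea concrete, a minimal sketch might look like the
code below. vaddr_pin_pages() and unpin_page() are only the names
suggested in Jan's mail, not an existing API, and the bodies are just
thin placeholders around today's gup and the put_user_page() call this
series introduces:

#include <linux/mm.h>

/*
 * Sketch only: pin and unpin sites pair up by name, so a callsite that
 * pins through the wrapper but releases with a bare put_page() is easy
 * to spot in review or flag with a script.
 */
static inline int vaddr_pin_pages(unsigned long start, int nr_pages,
				  unsigned int gup_flags,
				  struct page **pages)
{
	return get_user_pages_fast(start, nr_pages, gup_flags, pages);
}

static inline void unpin_page(struct page *page)
{
	/* put_user_page() is what this series adds. */
	put_user_page(page);
}

The value is purely in the naming and the grep-ability it buys; the
wrapper itself adds no tracking beyond what gup already does.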
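
On the bvec point: get_user_bvec()/put_user_bvec() do not exist yet, so
the prototypes and the huge-page example below are only an illustration
of the shape such an interface could take, not Christoph's actual work:

#include <linux/mm.h>
#include <linux/bvec.h>
#include <linux/huge_mm.h>

/*
 * Hypothetical interface: pin a user address range and describe it as
 * bio_vecs rather than as an array of PAGE_SIZE pages.
 */
long get_user_bvec(unsigned long start, unsigned long len,
		   unsigned int gup_flags, struct bio_vec *bv, int max_vecs);
void put_user_bvec(struct bio_vec *bv, int nr_vecs);

/*
 * Illustration (assumes THP): a single bio_vec covers a whole 2MB huge
 * page, so only the compound head takes one reference instead of 512
 * per-page get/put operations.
 */
static void example_single_bvec(struct page *huge_page, struct bio_vec *bv)
{
	bv->bv_page   = compound_head(huge_page);
	bv->bv_offset = 0;
	bv->bv_len    = HPAGE_PMD_SIZE;	/* 512 * PAGE_SIZE on x86-64 */
}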