Date: Mon, 25 Nov 2019 09:59:15 +0100
From: Jan Kara
To: John Hubbard
Cc: Andrew Morton, Al Viro, Alex Williamson, Benjamin Herrenschmidt,
    Björn Töpel, Christoph Hellwig, Dan Williams, Daniel Vetter,
    Dave Chinner, David Airlie, David S. Miller, Ira Weiny, Jan Kara,
    Jason Gunthorpe, Jens Axboe, Jonathan Corbet, Jérôme Glisse,
    Magnus Karlsson, Mauro Carvalho Chehab, Michael Ellerman,
    Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
    Vlastimil Babka, bpf@vger.kernel.org,
    dri-devel@lists.freedesktop.org, kvm@vger.kernel.org,
    linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-media@vger.kernel.org, linux-rdma@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
    linux-mm@kvack.org, LKML
Subject: Re: [PATCH 17/19] powerpc: book3s64: convert to pin_user_pages() and put_user_page()
Message-ID: <20191125085915.GB1797@quack2.suse.cz>
References: <20191125042011.3002372-1-jhubbard@nvidia.com>
 <20191125042011.3002372-18-jhubbard@nvidia.com>
In-Reply-To: <20191125042011.3002372-18-jhubbard@nvidia.com>

On Sun 24-11-19 20:20:09, John Hubbard wrote:
> 1. Convert from get_user_pages() to pin_user_pages().
> 
> 2. As required by pin_user_pages(), release these pages via
> put_user_page(). In this case, do so via put_user_pages_dirty_lock().
> 
> That has the side effect of calling set_page_dirty_lock(), instead
> of set_page_dirty(). This is probably more accurate.
> 
> As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
> dealing with a file backed page where we have reference on the inode it
> hangs off." [1]
> 
> 3. Release each page in mem->hpages[] (instead of mem->hpas[]), because
> that is the array that pin_longterm_pages() filled in. This is more
> accurate and should be a little safer from a maintenance point of
> view.

Except that this breaks the code: hpages is unioned with hpas...
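
To spell the aliasing out, the relevant part of the struct looks roughly
like this (a sketch from memory; see the real definition in
arch/powerpc/mm/book3s64/iommu_api.c for the authoritative layout, the
explanatory comments here are mine):

	struct mm_iommu_table_group_mem_t {
		/* ... other fields omitted ... */
		u64 entries;		/* number of entries in hpas/hpages[] */
		union {
			struct page **hpages;	/* vmalloc'ed; holds struct page
						 * pointers only until the slots
						 * are rewritten below */
			phys_addr_t *hpas;	/* same storage, reused for
						 * physical addresses plus the
						 * MM_IOMMU_TABLE_GROUP_PAGE_DIRTY
						 * flag bit */
		};
	};

Once mm_iommu_do_alloc() rewrites each slot as
page_to_pfn(page) << PAGE_SHIFT, mem->hpages[i] no longer holds a valid
struct page pointer, so handing it to put_user_pages() or
put_user_pages_dirty_lock() makes those functions treat a physical
address as a struct page *.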
> [1] https://lore.kernel.org/r/20190723153640.GB720@lst.de
> 
> Signed-off-by: John Hubbard
> ---
>  arch/powerpc/mm/book3s64/iommu_api.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
> index 56cc84520577..196383e8e5a9 100644
> --- a/arch/powerpc/mm/book3s64/iommu_api.c
> +++ b/arch/powerpc/mm/book3s64/iommu_api.c
> @@ -103,7 +103,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
>  	for (entry = 0; entry < entries; entry += chunk) {
>  		unsigned long n = min(entries - entry, chunk);
>  
> -		ret = get_user_pages(ua + (entry << PAGE_SHIFT), n,
> +		ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n,
>  				FOLL_WRITE | FOLL_LONGTERM,
>  				mem->hpages + entry, NULL);
>  		if (ret == n) {
> @@ -167,9 +167,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
>  	return 0;
>  
>  free_exit:
> -	/* free the reference taken */
> -	for (i = 0; i < pinned; i++)
> -		put_page(mem->hpages[i]);
> +	/* free the references taken */
> +	put_user_pages(mem->hpages, pinned);
>  
>  	vfree(mem->hpas);
>  	kfree(mem);
> @@ -212,10 +211,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>  		if (!page)
>  			continue;
>  
> -		if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
> -			SetPageDirty(page);
> +		put_user_pages_dirty_lock(&mem->hpages[i], 1,
> +					  MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);

And the dirtying condition is wrong here as well: it is now always true,
because the non-zero constant MM_IOMMU_TABLE_GROUP_PAGE_DIRTY is passed as
the make_dirty argument instead of being tested against mem->hpas[i]. A
corrected call is sketched below.

								Honza
-- 
Jan Kara
SUSE Labs, CR
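
For reference, one way to address both problems in the mm_iommu_unpin()
hunk is sketched below (untested; it reuses the 'page' local that the
surrounding loop already derives from mem->hpas[i], and assumes the
5.4-era signature put_user_pages_dirty_lock(struct page **pages,
unsigned long npages, bool make_dirty)):

		page = pfn_to_page(mem->hpas[i] >> PAGE_SHIFT);
		if (!page)
			continue;

		/* Pass a real struct page pointer, and only dirty the page
		 * if the flag bit is actually set in hpas[i]. */
		put_user_pages_dirty_lock(&page, 1,
				mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);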