Date: Fri, 29 Nov 2019 12:23:15 +0100
From: Jan Kara
To: John Hubbard
Cc: Andrew Morton, Al Viro, Alex Williamson, Benjamin Herrenschmidt,
	Björn Töpel, Christoph Hellwig, Dan Williams, Daniel Vetter,
	Dave Chinner, David Airlie, "David S. Miller", Ira Weiny,
	Jan Kara, Jason Gunthorpe, Jens Axboe, Jonathan Corbet,
	Jérôme Glisse, Magnus Karlsson, Mauro Carvalho Chehab,
	Michael Ellerman, Michal Hocko, Mike Kravetz, Paul Mackerras,
	Shuah Khan, Vlastimil Babka, bpf@vger.kernel.org,
	dri-devel@lists.freedesktop.org, kvm@vger.kernel.org,
	linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-media@vger.kernel.org, linux-rdma@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
	linux-mm@kvack.org, LKML
Subject: Re: [PATCH v2 17/19] powerpc: book3s64: convert to pin_user_pages() and put_user_page()
Message-ID: <20191129112315.GB1121@quack2.suse.cz>
References: <20191125231035.1539120-1-jhubbard@nvidia.com>
 <20191125231035.1539120-18-jhubbard@nvidia.com>
In-Reply-To: <20191125231035.1539120-18-jhubbard@nvidia.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon 25-11-19 15:10:33, John Hubbard wrote:
> 1. Convert from get_user_pages() to pin_user_pages().
> 
> 2. As required by pin_user_pages(), release these pages via
>    put_user_page(). In this case, do so via put_user_pages_dirty_lock().
> 
>    That has the side effect of calling set_page_dirty_lock(), instead
>    of set_page_dirty(). This is probably more accurate.

Maybe more accurate, but it doesn't work for mm_iommu_unpin(). As far as I
can see, mm_iommu_unpin() gets called from an RCU callback, which is
executed in interrupt context, and you cannot lock pages from such a
context. So you need to queue work from the RCU callback and then do the
real work from the workqueue...
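
I.e., something like the below, completely untested sketch. It assumes
adding a struct work_struct 'work' to mm_iommu_table_group_mem_t (needs
<linux/workqueue.h>); the helper name mm_iommu_free_work() is made up:

/*
 * Untested sketch only: 'work' is an assumed new member of
 * mm_iommu_table_group_mem_t, and mm_iommu_free_work() is a made-up
 * helper name.
 */
static void mm_iommu_free_work(struct work_struct *work)
{
	struct mm_iommu_table_group_mem_t *mem = container_of(work,
			struct mm_iommu_table_group_mem_t, work);

	/* Runs in process context, so put_user_pages_dirty_lock() is safe. */
	mm_iommu_unpin(mem);
	vfree(mem->hpas);
	kfree(mem);
}

static void mm_iommu_free(struct rcu_head *head)
{
	struct mm_iommu_table_group_mem_t *mem = container_of(head,
			struct mm_iommu_table_group_mem_t, rcu);

	/*
	 * RCU callbacks run from softirq context where we must not lock
	 * pages, so only queue the teardown here and let the workqueue
	 * do the real work.
	 */
	INIT_WORK(&mem->work, mm_iommu_free_work);
	schedule_work(&mem->work);
}

That way the RCU callback itself does nothing but queue the work, and
everything that can sleep or lock pages happens from process context.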
								Honza

> 
> As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
> dealing with a file backed page where we have reference on the inode it
> hangs off." [1]
> 
> [1] https://lore.kernel.org/r/20190723153640.GB720@lst.de
> 
> Cc: Jan Kara
> Signed-off-by: John Hubbard
> ---
>  arch/powerpc/mm/book3s64/iommu_api.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
> index 56cc84520577..fc1670a6fc3c 100644
> --- a/arch/powerpc/mm/book3s64/iommu_api.c
> +++ b/arch/powerpc/mm/book3s64/iommu_api.c
> @@ -103,7 +103,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
>  	for (entry = 0; entry < entries; entry += chunk) {
>  		unsigned long n = min(entries - entry, chunk);
>  
> -		ret = get_user_pages(ua + (entry << PAGE_SHIFT), n,
> +		ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n,
>  				FOLL_WRITE | FOLL_LONGTERM,
>  				mem->hpages + entry, NULL);
>  		if (ret == n) {
> @@ -167,9 +167,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
>  	return 0;
>  
>  free_exit:
> -	/* free the reference taken */
> -	for (i = 0; i < pinned; i++)
> -		put_page(mem->hpages[i]);
> +	/* free the references taken */
> +	put_user_pages(mem->hpages, pinned);
>  
>  	vfree(mem->hpas);
>  	kfree(mem);
> @@ -212,10 +211,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>  		if (!page)
>  			continue;
>  
> -		if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
> -			SetPageDirty(page);
> +		put_user_pages_dirty_lock(&page, 1,
> +				mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);
>  
> -		put_page(page);
>  		mem->hpas[i] = 0;
>  	}
>  }
> -- 
> 2.24.0
> 
-- 
Jan Kara
SUSE Labs, CR