From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton, linux-mm@kvack.org
Cc: Al Viro, Christian Benvenuti, Christoph Hellwig, Christopher Lameter,
    Dan Williams, Dave Chinner, Dennis Dalessandro, Doug Ledford, Ira Weiny,
    Jan Kara, Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
    Mike Rapoport, Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
    linux-fsdevel@vger.kernel.org, John Hubbard
Subject: [PATCH v4 1/1] mm: introduce put_user_page*(), placeholder versions
Date: Fri, 8 Mar 2019 13:36:33 -0800
Message-Id: <20190308213633.28978-2-jhubbard@nvidia.com>
In-Reply-To: <20190308213633.28978-1-jhubbard@nvidia.com>
References: <20190308213633.28978-1-jhubbard@nvidia.com>

From: John Hubbard <jhubbard@nvidia.com>

Introduces put_user_page(), which simply calls put_page(). This
provides a way to update all get_user_pages*() callers, so that they
call put_user_page() instead of put_page().

Also introduces put_user_pages(), and a few dirty/locked variations, as
a replacement for release_pages(), and also as a replacement for
open-coded loops that release multiple pages. These may be used for
subsequent performance improvements, via batching of pages to be
released.

This is the first step of fixing a problem (also described in [1] and
[2]) with interactions between get_user_pages ("gup") and filesystems.

Problem description: let's start with a bug report. Below is what
sometimes happens, under memory pressure, when a driver pins some pages
via gup, then marks those pages dirty and releases them. Note that the
gup documentation actually recommends that pattern. The problem is that
the filesystem may do a writeback while the pages are gup-pinned, after
which the filesystem believes that the pages are clean. So, when the
driver later marks the pages as dirty, that conflicts with the
filesystem's page tracking and results in a BUG(), like this one that I
experienced:

    kernel BUG at /build/linux-fQ94TU/linux-4.4.0/fs/ext4/inode.c:1899!
    backtrace:
        ext4_writepage
        __writepage
        write_cache_pages
        ext4_writepages
        do_writepages
        __writeback_single_inode
        writeback_sb_inodes
        __writeback_inodes_wb
        wb_writeback
        wb_workfn
        process_one_work
        worker_thread
        kthread
        ret_from_fork

...which is due to the file system asserting that there are still
buffer heads attached:

        ({                                                      \
                BUG_ON(!PagePrivate(page));                     \
                ((struct buffer_head *)page_private(page));     \
        })

Dave Chinner's description of this is very clear:

    "The fundamental issue is that ->page_mkwrite must be called on
    every write access to a clean file backed page, not just the first
    one. How long the GUP reference lasts is irrelevant, if the page is
    clean and you need to dirty it, you must call ->page_mkwrite before
    it is marked writeable and dirtied. Every. Time."

This is just one symptom of the larger design problem: real filesystems
that actually write to a backing device do not support get_user_pages()
being called on their pages, and letting hardware write directly to
those pages, even though that pattern has been going on since about
2005 or so.
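To make the failure mode concrete, here is a minimal sketch (not part
of this patch) of the gup pattern described above. get_user_pages(),
set_page_dirty_lock() and put_page() are real kernel APIs; the
surrounding driver context and the do_dma() helper are hypothetical:

    /*
     * Hypothetical driver code, following the pattern that the gup
     * documentation currently recommends: pin the pages, let the
     * device write into them, then dirty and release them. If
     * writeback ran while the pages were pinned, the call to
     * set_page_dirty_lock() below collides with the filesystem's
     * clean-page accounting, producing a BUG() like the one above.
     */
    ret = get_user_pages(user_addr, nr_pages, FOLL_WRITE, pages, NULL);
    if (ret <= 0)
        return ret;

    do_dma(pages, ret);             /* device writes into the pages */

    for (i = 0; i < ret; i++) {
        set_page_dirty_lock(pages[i]);
        put_page(pages[i]);         /* after this patch: put_user_page() */
    }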
The steps to fix it are:

1) (This patch): provide put_user_page*() routines, intended to be used
   for releasing pages that were pinned via get_user_pages*().

2) Convert all of the call sites for get_user_pages*(), to invoke
   put_user_page*(), instead of put_page(). This involves dozens of
   call sites, and will take some time.

3) After (2) is complete, use get_user_pages*() and put_user_page*() to
   implement tracking of these pages. This tracking will be separate
   from the existing struct page refcounting.

4) Use the tracking and identification of these pages, to implement
   special handling (especially in writeback paths) when the pages are
   backed by a filesystem.

[1] https://lwn.net/Articles/774411/ : "DMA and get_user_pages()"
[2] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

Cc: Al Viro
Cc: Christoph Hellwig
Cc: Christopher Lameter
Cc: Dan Williams
Cc: Dave Chinner
Cc: Ira Weiny
Cc: Jan Kara
Cc: Jason Gunthorpe
Cc: Jerome Glisse
Cc: Matthew Wilcox
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Ralph Campbell
Reviewed-by: Jan Kara
Reviewed-by: Mike Rapoport # docs
Reviewed-by: Ira Weiny
Reviewed-by: Jérôme Glisse
Signed-off-by: John Hubbard
---
 include/linux/mm.h | 24 ++++++++++++++
 mm/gup.c           | 82 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 106 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5801ee849f36..353035c8b115 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -993,6 +993,30 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
+/**
+ * put_user_page() - release a gup-pinned page
+ * @page: pointer to page to be released
+ *
+ * Pages that were pinned via get_user_pages*() must be released via
+ * either put_user_page(), or one of the put_user_pages*() routines
+ * below. This is so that eventually, pages that are pinned via
+ * get_user_pages*() can be separately tracked and uniquely handled. In
+ * particular, interactions with RDMA and filesystems need special
+ * handling.
+ *
+ * put_user_page() and put_page() are not interchangeable, despite this early
+ * implementation that makes them look the same. put_user_page() calls must
+ * be perfectly matched up with get_user_page() calls.
+ */
+static inline void put_user_page(struct page *page)
+{
+	put_page(page);
+}
+
+void put_user_pages_dirty(struct page **pages, unsigned long npages);
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
+void put_user_pages(struct page **pages, unsigned long npages);
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif
diff --git a/mm/gup.c b/mm/gup.c
index f84e22685aaa..37085b8163b1 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -28,6 +28,88 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
+typedef int (*set_dirty_func_t)(struct page *page);
+
+static void __put_user_pages_dirty(struct page **pages,
+				   unsigned long npages,
+				   set_dirty_func_t sdf)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++) {
+		struct page *page = compound_head(pages[index]);
+
+		if (!PageDirty(page))
+			sdf(page);
+
+		put_user_page(page);
+	}
+}
+
+/**
+ * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
+ * @pages:  array of pages to be marked dirty and released.
+ * @npages: number of pages in the @pages array.
+ *
+ * "gup-pinned page" refers to a page that has had one of the get_user_pages()
+ * variants called on that page.
+ *
+ * For each page in the @pages array, make that page (or its head page, if a
+ * compound page) dirty, if it was previously listed as clean. Then, release
+ * the page using put_user_page().
+ *
+ * Please see the put_user_page() documentation for details.
+ *
+ * set_page_dirty(), which does not lock the page, is used here.
+ * Therefore, it is the caller's responsibility to ensure that this is
+ * safe. If not, then put_user_pages_dirty_lock() should be called instead.
+ *
+ */
+void put_user_pages_dirty(struct page **pages, unsigned long npages)
+{
+	__put_user_pages_dirty(pages, npages, set_page_dirty);
+}
+EXPORT_SYMBOL(put_user_pages_dirty);
+
+/**
+ * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
+ * @pages:  array of pages to be marked dirty and released.
+ * @npages: number of pages in the @pages array.
+ *
+ * For each page in the @pages array, make that page (or its head page, if a
+ * compound page) dirty, if it was previously listed as clean. Then, release
+ * the page using put_user_page().
+ *
+ * Please see the put_user_page() documentation for details.
+ *
+ * This is just like put_user_pages_dirty(), except that it invokes
+ * set_page_dirty_lock(), instead of set_page_dirty().
+ *
+ */
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
+{
+	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
+}
+EXPORT_SYMBOL(put_user_pages_dirty_lock);
+
+/**
+ * put_user_pages() - release an array of gup-pinned pages.
+ * @pages:  array of pages to be released.
+ * @npages: number of pages in the @pages array.
+ *
+ * For each page in the @pages array, release the page using put_user_page().
+ *
+ * Please see the put_user_page() documentation for details.
+ */
+void put_user_pages(struct page **pages, unsigned long npages)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++)
+		put_user_page(pages[index]);
+}
+EXPORT_SYMBOL(put_user_pages);
+
 static struct page *no_page_table(struct vm_area_struct *vma,
 		unsigned int flags)
 {
-- 
2.21.0
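For comparison, once a call site has been converted in step (2), the
dirty-and-release loop from the earlier sketch collapses to a single
call. Again a hedged sketch, reusing the same hypothetical driver
context as above:

    /*
     * Device has finished writing: dirty (with locking) and release
     * the whole gup-pinned array in one call.
     */
    put_user_pages_dirty_lock(pages, ret);

Handing the whole array to one call is also what later enables the
batched release and the separate pin tracking described in steps (3)
and (4).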