From: Thomas Hellström (VMware)
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: torvalds@linux-foundation.org, Thomas Hellstrom, Andrew Morton, Matthew Wilcox,
 Will Deacon, Peter Zijlstra, Rik van Riel, Minchan Kim, Michal Hocko,
 Huang Ying, Jérôme Glisse, Kirill A.
 Shutemov
Subject: [PATCH v4 4/9] mm: Add a walk_page_mapping() function to the pagewalk code
Date: Tue, 8 Oct 2019 11:15:03 +0200
Message-Id: <20191008091508.2682-5-thomas_os@shipmail.org>
In-Reply-To: <20191008091508.2682-1-thomas_os@shipmail.org>
References: <20191008091508.2682-1-thomas_os@shipmail.org>

From: Thomas Hellstrom

For users who want to traverse all page table entries pointing into a
region of a struct address_space mapping, introduce a walk_page_mapping()
function. It will initially be used for dirty tracking in virtual
graphics drivers.

Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Minchan Kim
Cc: Michal Hocko
Cc: Huang Ying
Cc: Jérôme Glisse
Cc: Kirill A. Shutemov
Signed-off-by: Thomas Hellstrom
---
 include/linux/pagewalk.h |  9 ++++
 mm/pagewalk.c            | 94 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 102 insertions(+), 1 deletion(-)

diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index c4a013eb445d..9b2742911eff 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -32,6 +32,9 @@ struct mm_walk;
  *			"do page table walk over the current vma", returning
  *			a negative value means "abort current page table walk
  *			right now" and returning 1 means "skip the current vma"
+ * @pre_vma:		if set, called before starting walk on a non-null vma.
+ * @post_vma:		if set, called after a walk on a non-null vma, provided
+ *			that @pre_vma and the vma walk succeeded.
  */
 struct mm_walk_ops {
 	int (*pud_entry)(pud_t *pud, unsigned long addr,
@@ -47,6 +50,9 @@ struct mm_walk_ops {
 			 struct mm_walk *walk);
 	int (*test_walk)(unsigned long addr, unsigned long next,
 			 struct mm_walk *walk);
+	int (*pre_vma)(unsigned long start, unsigned long end,
+		       struct mm_walk *walk);
+	void (*post_vma)(struct mm_walk *walk);
 };
 
 /**
@@ -70,5 +76,8 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
 		void *private);
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		void *private);
+int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
+		      pgoff_t nr, const struct mm_walk_ops *ops,
+		      void *private);
 
 #endif /* _LINUX_PAGEWALK_H */
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index f844c2a2aa60..8de4574e7753 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -262,13 +262,23 @@ static int __walk_page_range(unsigned long start, unsigned long end,
 {
 	int err = 0;
 	struct vm_area_struct *vma = walk->vma;
+	const struct mm_walk_ops *ops = walk->ops;
+
+	if (vma && ops->pre_vma) {
+		err = ops->pre_vma(start, end, walk);
+		if (err)
+			return err;
+	}
 
 	if (vma && is_vm_hugetlb_page(vma)) {
-		if (walk->ops->hugetlb_entry)
+		if (ops->hugetlb_entry)
 			err = walk_hugetlb_range(start, end, walk);
 	} else
 		err = walk_pgd_range(start, end, walk);
 
+	if (vma && ops->post_vma)
+		ops->post_vma(walk);
+
 	return err;
 }
 
@@ -305,6 +315,11 @@ static int __walk_page_range(unsigned long start, unsigned long end,
  * its vm_flags. walk_page_test() and @ops->test_walk() are used for this
  * purpose.
  *
+ * If operations need to be staged before and committed after a vma is walked,
+ * there are two callbacks, pre_vma() and post_vma(). Note that post_vma(),
+ * since it is intended to handle commit-type operations, can't return any
+ * errors.
+ *
  * struct mm_walk keeps current values of some common data like vma and pmd,
  * which are useful for the access from callbacks.
 * If you want to pass some caller-specific data to callbacks,
 * @private should be helpful.
@@ -391,3 +406,80 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		return err;
 	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
 }
+
+/**
+ * walk_page_mapping - walk all memory areas mapped into a struct address_space.
+ * @mapping: Pointer to the struct address_space
+ * @first_index: First page offset in the address_space
+ * @nr: Number of incremental page offsets to cover
+ * @ops: operation to call during the walk
+ * @private: private data for callbacks' usage
+ *
+ * This function walks all memory areas mapped into a struct address_space.
+ * The walk is limited to only the given page-size index range, but if
+ * the index boundaries cross a huge page-table entry, that entry will be
+ * included.
+ *
+ * Also see walk_page_range() for additional information.
+ *
+ * Locking:
+ *   This function can't require that the struct mm_struct::mmap_sem is held,
+ *   since @mapping may be mapped by multiple processes. Instead
+ *   @mapping->i_mmap_rwsem must be held. This might have implications in the
+ *   callbacks, and it's up to the caller to ensure that the
+ *   struct mm_struct::mmap_sem is not needed.
+ *
+ *   This also means that a caller can't rely on the struct
+ *   vm_area_struct::vm_flags to be constant across a call,
+ *   except for immutable flags. Callers requiring this shouldn't use
+ *   this function.
+ *
+ * Return: 0 on success, negative error code on failure, positive number on
+ * caller defined premature termination.
+ */
+int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
+		      pgoff_t nr, const struct mm_walk_ops *ops,
+		      void *private)
+{
+	struct mm_walk walk = {
+		.ops = ops,
+		.private = private,
+	};
+	struct vm_area_struct *vma;
+	pgoff_t vba, vea, cba, cea;
+	unsigned long start_addr, end_addr;
+	int err = 0;
+
+	lockdep_assert_held(&mapping->i_mmap_rwsem);
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, first_index,
+				  first_index + nr - 1) {
+		/* Clip to the vma */
+		vba = vma->vm_pgoff;
+		vea = vba + vma_pages(vma);
+		cba = first_index;
+		cba = max(cba, vba);
+		cea = first_index + nr;
+		cea = min(cea, vea);
+
+		start_addr = ((cba - vba) << PAGE_SHIFT) + vma->vm_start;
+		end_addr = ((cea - vba) << PAGE_SHIFT) + vma->vm_start;
+		if (start_addr >= end_addr)
+			continue;
+
+		walk.vma = vma;
+		walk.mm = vma->vm_mm;
+
+		err = walk_page_test(vma->vm_start, vma->vm_end, &walk);
+		if (err > 0) {
+			err = 0;
+			break;
+		} else if (err < 0)
+			break;
+
+		err = __walk_page_range(start_addr, end_addr, &walk);
+		if (err)
+			break;
+	}
+
+	return err;
+}
-- 
2.21.0