From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-kernel@vger.kernel.org
Subject: [PATCH 60/75] mm/rmap: Constify the rmap_walk_control argument
Date: Fri, 4 Feb 2022 19:58:37 +0000
Message-Id: <20220204195852.1751729-61-willy@infradead.org>
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>

The rmap walking functions do not modify the rmap_walk_control, and
page_idle_clear_pte_refs() takes advantage of that to move construction
of the rmap_walk_control to compile time.  This lets us remove an
unclean cast.
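[Illustration only, not part of the patch: a minimal sketch of the pattern
the paragraph above describes. The callback name and function here are
made up and the rmap_one signature is approximated; it only shows that,
once the walk takes a const pointer, a caller can build its control block
at compile time and pass it without casting away const.]

/* Hypothetical caller, sketching the compile-time construction the
 * commit message refers to.  All names are illustrative only. */
static bool example_rmap_one(struct folio *folio, struct vm_area_struct *vma,
			     unsigned long addr, void *arg)
{
	/* ... inspect the mappings of this folio ... */
	return true;	/* keep walking the remaining VMAs */
}

static void example_walk(struct folio *folio)
{
	/* Constructed at compile time; can be passed directly now that
	 * rmap_walk() takes a const struct rmap_walk_control *. */
	static const struct rmap_walk_control rwc = {
		.rmap_one = example_rmap_one,
	};

	rmap_walk(folio, &rwc);
}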
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/ksm.h  |  4 ++--
 include/linux/rmap.h |  4 ++--
 mm/ksm.c             |  2 +-
 mm/page_idle.c       |  2 +-
 mm/rmap.c            | 14 +++++++-------
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 0b4f17418f64..0630e545f4cb 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -51,7 +51,7 @@ static inline void ksm_exit(struct mm_struct *mm)
 struct page *ksm_might_need_to_copy(struct page *page,
 			struct vm_area_struct *vma, unsigned long address);
 
-void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
+void rmap_walk_ksm(struct folio *folio, const struct rmap_walk_control *rwc);
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
 
 #else  /* !CONFIG_KSM */
@@ -79,7 +79,7 @@ static inline struct page *ksm_might_need_to_copy(struct page *page,
 }
 
 static inline void rmap_walk_ksm(struct folio *folio,
-			struct rmap_walk_control *rwc)
+			const struct rmap_walk_control *rwc)
 {
 }
 
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 4e4c4412b295..96522944739e 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -294,8 +294,8 @@ struct rmap_walk_control {
 	bool (*invalid_vma)(struct vm_area_struct *vma, void *arg);
 };
 
-void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc);
-void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc);
+void rmap_walk(struct folio *folio, const struct rmap_walk_control *rwc);
+void rmap_walk_locked(struct folio *folio, const struct rmap_walk_control *rwc);
 
 #else	/* !CONFIG_MMU */
 
diff --git a/mm/ksm.c b/mm/ksm.c
index 0ec3d9035419..e95c454303a2 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2601,7 +2601,7 @@ struct page *ksm_might_need_to_copy(struct page *page,
 	return new_page;
 }
 
-void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
+void rmap_walk_ksm(struct folio *folio, const struct rmap_walk_control *rwc)
 {
 	struct stable_node *stable_node;
 	struct rmap_item *rmap_item;
diff --git a/mm/page_idle.c b/mm/page_idle.c
index 3563c3850795..982f35d91b96 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -107,7 +107,7 @@ static void page_idle_clear_pte_refs(struct page *page)
 	if (need_lock && !folio_trylock(folio))
 		return;
 
-	rmap_walk(folio, (struct rmap_walk_control *)&rwc);
+	rmap_walk(folio, &rwc);
 
 	if (need_lock)
 		folio_unlock(folio);
diff --git a/mm/rmap.c b/mm/rmap.c
index 1ade44970ab1..1d22cb825931 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2273,7 +2273,7 @@ void __put_anon_vma(struct anon_vma *anon_vma)
 }
 
 static struct anon_vma *rmap_walk_anon_lock(struct folio *folio,
-					struct rmap_walk_control *rwc)
+					const struct rmap_walk_control *rwc)
 {
 	struct anon_vma *anon_vma;
 
@@ -2308,8 +2308,8 @@ static struct anon_vma *rmap_walk_anon_lock(struct folio *folio,
  * vm_flags for that VMA.  That should be OK, because that vma shouldn't be
  * LOCKED.
  */
-static void rmap_walk_anon(struct folio *folio, struct rmap_walk_control *rwc,
-		bool locked)
+static void rmap_walk_anon(struct folio *folio,
+		const struct rmap_walk_control *rwc, bool locked)
 {
 	struct anon_vma *anon_vma;
 	pgoff_t pgoff_start, pgoff_end;
@@ -2361,8 +2361,8 @@ static void rmap_walk_anon(struct folio *folio, struct rmap_walk_control *rwc,
  * vm_flags for that VMA.  That should be OK, because that vma shouldn't be
  * LOCKED.
  */
-static void rmap_walk_file(struct folio *folio, struct rmap_walk_control *rwc,
-		bool locked)
+static void rmap_walk_file(struct folio *folio,
+		const struct rmap_walk_control *rwc, bool locked)
 {
 	struct address_space *mapping = folio_mapping(folio);
 	pgoff_t pgoff_start, pgoff_end;
@@ -2404,7 +2404,7 @@ static void rmap_walk_file(struct folio *folio, struct rmap_walk_control *rwc,
 	i_mmap_unlock_read(mapping);
 }
 
-void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc)
+void rmap_walk(struct folio *folio, const struct rmap_walk_control *rwc)
 {
 	if (unlikely(folio_test_ksm(folio)))
 		rmap_walk_ksm(folio, rwc);
@@ -2415,7 +2415,7 @@ void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc)
 }
 
 /* Like rmap_walk, but caller holds relevant rmap lock */
-void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc)
+void rmap_walk_locked(struct folio *folio, const struct rmap_walk_control *rwc)
 {
 	/* no ksm support for now */
 	VM_BUG_ON_FOLIO(folio_test_ksm(folio), folio);
-- 
2.34.1