From mboxrd@z Thu Jan 1 00:00:00 1970
From: Chih-En Lin
To: Andrew Morton, Qi Zheng, David Hildenbrand, Matthew Wilcox, Christophe Leroy, John Hubbard, Nadav Amit
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Steven Rostedt, Masami Hiramatsu, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Yang Shi, Peter Xu, Zach O'Keefe, "Liam R. Howlett", Alex Sierra, Xianting Tian, Colin Cross, Suren Baghdasaryan, Barry Song, Pasha Tatashin, Suleiman Souhlal, Brian Geffon, Yu Zhao, Tong Tiangen, Liu Shixin, Li kunyu, Anshuman Khandual, Vlastimil Babka, Hugh Dickins, Minchan Kim, Miaohe Lin, Gautam Menghani, Catalin Marinas, Mark Brown, Will Deacon, "Eric W. Biederman", Thomas Gleixner, Sebastian Andrzej Siewior, Andy Lutomirski, Fenghua Yu, Barret Rhoden, Davidlohr Bueso, "Jason A.
Donenfeld", Dinglan Peng, Pedro Fonseca, Jim Huang, Huichun Feng, Chih-En Lin
Subject: [PATCH v3 04/14] mm/rmap: Break COW PTE in rmap walking
Date: Tue, 20 Dec 2022 15:27:33 +0800
Message-Id: <20221220072743.3039060-5-shiyn.lin@gmail.com>
In-Reply-To: <20221220072743.3039060-1-shiyn.lin@gmail.com>
References: <20221220072743.3039060-1-shiyn.lin@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Some features (unmap, migrate, device exclusive, mkclean, etc.) may modify
the PTE entry via rmap. Add a new page vma mapped walk flag,
PVMW_BREAK_COW_PTE, to indicate that the rmap walk should break the
COW-ed PTE table first.

Signed-off-by: Chih-En Lin
---
 include/linux/rmap.h |  2 ++
 mm/migrate.c         |  3 ++-
 mm/page_vma_mapped.c |  2 ++
 mm/rmap.c            | 12 +++++++-----
 mm/vmscan.c          |  7 ++++++-
 5 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index bd3504d11b155..d0f07e5519736 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -368,6 +368,8 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 #define PVMW_SYNC		(1 << 0)
 /* Look for migration entries rather than present PTEs */
 #define PVMW_MIGRATION		(1 << 1)
+/* Break COW-ed PTE during walking */
+#define PVMW_BREAK_COW_PTE	(1 << 2)
 
 struct page_vma_mapped_walk {
 	unsigned long pfn;
diff --git a/mm/migrate.c b/mm/migrate.c
index dff333593a8ae..a4be7e04c9b09 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -174,7 +174,8 @@ void putback_movable_pages(struct list_head *l)
 static bool remove_migration_pte(struct folio *folio,
 		struct vm_area_struct *vma, unsigned long addr, void *old)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, old, vma, addr, PVMW_SYNC | PVMW_MIGRATION);
+	DEFINE_FOLIO_VMA_WALK(pvmw, old, vma, addr,
+			      PVMW_SYNC | PVMW_MIGRATION | PVMW_BREAK_COW_PTE);
 
 	while (page_vma_mapped_walk(&pvmw)) {
 		rmap_t rmap_flags = RMAP_NONE;
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 93e13fc17d3cb..5dfc9236dc505 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -251,6 +251,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			step_forward(pvmw, PMD_SIZE);
 			continue;
 		}
+		if (pvmw->flags & PVMW_BREAK_COW_PTE)
+			break_cow_pte(vma, pvmw->pmd, pvmw->address);
 		if (!map_pte(pvmw))
 			goto next_pte;
this_pte:
diff --git a/mm/rmap.c b/mm/rmap.c
index 2ec925e5fa6a9..b1b7dcbd498be 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -807,7 +807,8 @@ static bool folio_referenced_one(struct folio *folio,
 		struct vm_area_struct *vma, unsigned long address, void *arg)
 {
 	struct folio_referenced_arg *pra = arg;
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
+	/* It will clear the entry, so we should break COW PTE. */
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
 	int referenced = 0;
 
 	while (page_vma_mapped_walk(&pvmw)) {
@@ -1012,7 +1013,8 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			     unsigned long address, void *arg)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address,
+			      PVMW_SYNC | PVMW_BREAK_COW_PTE);
 	int *cleaned = arg;
 
 	*cleaned += page_vma_mkclean_one(&pvmw);
@@ -1471,7 +1473,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		     unsigned long address, void *arg)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
 	pte_t pteval;
 	struct page *subpage;
 	bool anon_exclusive, ret = true;
@@ -1842,7 +1844,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		     unsigned long address, void *arg)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
 	pte_t pteval;
 	struct page *subpage;
 	bool anon_exclusive, ret = true;
@@ -2195,7 +2197,7 @@ static bool page_make_device_exclusive_one(struct folio *folio,
 		struct vm_area_struct *vma, unsigned long address, void *priv)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
 	struct make_exclusive_args *args = priv;
 	pte_t pteval;
 	struct page *subpage;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 026199c047e0e..980d2056adfd1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1781,6 +1781,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			}
 		}
 
+		/*
+		 * Break COW PTE since checking the reference
+		 * of folio might modify the PTE.
+		 */
 		if (!ignore_references)
 			references = folio_check_references(folio, sc);
 
@@ -1864,7 +1868,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		/*
 		 * The folio is mapped into the page tables of one or more
-		 * processes. Try to unmap it here.
+		 * processes. Try to unmap it here. Also, since it will write
+		 * to the page tables, break COW PTE if the table is COW-ed.
 		 */
 		if (folio_mapped(folio)) {
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
-- 
2.37.3