From mboxrd@z Thu Jan 1 00:00:00 1970
From: Catalin Marinas
To: linux-arm-kernel@lists.infradead.org
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, Will Deacon,
	Dave P Martin, Vincenzo Frascino, Szabolcs Nagy, Kevin Brodsky,
	Andrey Konovalov, Peter Collingbourne, Andrew Morton, Steven Price
Subject: [PATCH v7 25/29] mm: Add arch hooks for saving/restoring tags
Date: Wed, 15 Jul 2020 18:08:40 +0100
Message-Id: <20200715170844.30064-26-catalin.marinas@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715170844.30064-1-catalin.marinas@arm.com>
References: <20200715170844.30064-1-catalin.marinas@arm.com>

From: Steven Price

Arm's Memory Tagging Extension (MTE) adds some metadata (tags) to every
physical page. When swapping pages out to disk, it is necessary to save
these tags and later restore them when reading the pages back.

Add some hooks along with dummy implementations to enable the arch code
to handle this.

Three new hooks are added to the swap code:
 * arch_prepare_to_swap() and
 * arch_swap_invalidate_page() / arch_swap_invalidate_area().
One new hook is added to shmem:
 * arch_swap_restore()

Signed-off-by: Steven Price
[catalin.marinas@arm.com: add unlock_page() on the error path]
[catalin.marinas@arm.com: dropped the _tags suffix]
Signed-off-by: Catalin Marinas
Acked-by: Andrew Morton
---

Notes:
    v6:
    - Added comment on where the arch code should define the overriding
      macros (asm/pgtable.h).
    - Dropped _tags suffix from arch_swap_restore_tags().

    New in v4.

 include/linux/pgtable.h | 28 ++++++++++++++++++++++++++++
 mm/page_io.c            | 10 ++++++++++
 mm/shmem.c              |  6 ++++++
 mm/swapfile.c           |  2 ++
 4 files changed, 46 insertions(+)
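
For review context only, not part of the patch below: an architecture opting
in would override the default no-op hooks from its asm/pgtable.h, roughly as
sketched here. The my_arch_*() helpers are hypothetical placeholders for
whatever tag store the architecture actually provides.

/* Illustrative sketch of an arch override, e.g. in arch/foo/include/asm/pgtable.h */
#define __HAVE_ARCH_PREPARE_TO_SWAP
static inline int arch_prepare_to_swap(struct page *page)
{
	/* Save the tags before the page contents reach the swap device. */
	return my_arch_save_tags(page);			/* hypothetical helper */
}

#define __HAVE_ARCH_SWAP_INVALIDATE
static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
{
	/* The swap slot is being freed: drop any tags saved for it. */
	my_arch_drop_tags(swp_entry(type, offset));	/* hypothetical helper */
}

static inline void arch_swap_invalidate_area(int type)
{
	/* swapoff: drop everything saved for this swap area. */
	my_arch_drop_area_tags(type);			/* hypothetical helper */
}

#define __HAVE_ARCH_SWAP_RESTORE
static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
{
	/* Put the saved tags back onto the newly read page. */
	my_arch_restore_tags(entry, page);		/* hypothetical helper */
}
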
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 56c1e8eb7bb0..cbe775d3446f 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -631,6 +631,34 @@ static inline int arch_unmap_one(struct mm_struct *mm,
 }
 #endif
 
+/*
+ * Allow architectures to preserve additional metadata associated with
+ * swapped-out pages. The corresponding __HAVE_ARCH_SWAP_* macros and function
+ * prototypes must be defined in the arch-specific asm/pgtable.h file.
+ */
+#ifndef __HAVE_ARCH_PREPARE_TO_SWAP
+static inline int arch_prepare_to_swap(struct page *page)
+{
+	return 0;
+}
+#endif
+
+#ifndef __HAVE_ARCH_SWAP_INVALIDATE
+static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
+{
+}
+
+static inline void arch_swap_invalidate_area(int type)
+{
+}
+#endif
+
+#ifndef __HAVE_ARCH_SWAP_RESTORE
+static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
+{
+}
+#endif
+
 #ifndef __HAVE_ARCH_PGD_OFFSET_GATE
 #define pgd_offset_gate(mm, addr)	pgd_offset(mm, addr)
 #endif
diff --git a/mm/page_io.c b/mm/page_io.c
index e8726f3e3820..9f3835161002 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -252,6 +252,16 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 		unlock_page(page);
 		goto out;
 	}
+	/*
+	 * Arch code may have to preserve more data than just the page
+	 * contents, e.g. memory tags.
+	 */
+	ret = arch_prepare_to_swap(page);
+	if (ret) {
+		set_page_dirty(page);
+		unlock_page(page);
+		goto out;
+	}
 	if (frontswap_store(page) == 0) {
 		set_page_writeback(page);
 		unlock_page(page);
diff --git a/mm/shmem.c b/mm/shmem.c
index dacee627dae6..66024b1884c1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1673,6 +1673,12 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	}
 	wait_on_page_writeback(page);
 
+	/*
+	 * Some architectures may have to restore extra metadata to the
+	 * physical page after reading from swap.
+	 */
+	arch_swap_restore(swap, page);
+
 	if (shmem_should_replace_page(page, gfp)) {
 		error = shmem_replace_page(&page, gfp, info, index);
 		if (error)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 987276c557d1..b7a3ed45e606 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -716,6 +716,7 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 	else
 		swap_slot_free_notify = NULL;
 	while (offset <= end) {
+		arch_swap_invalidate_page(si->type, offset);
 		frontswap_invalidate_page(si->type, offset);
 		if (swap_slot_free_notify)
 			swap_slot_free_notify(si->bdev, offset);
@@ -2675,6 +2676,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	frontswap_map = frontswap_map_get(p);
 	spin_unlock(&p->lock);
 	spin_unlock(&swap_lock);
+	arch_swap_invalidate_area(p->type);
 	frontswap_invalidate_area(p->type);
 	frontswap_map_set(p, NULL);
 	mutex_unlock(&swapon_mutex);
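
Again purely as an illustration, none of it in the patch above: the
hypothetical my_arch_*() helpers from the earlier sketch could back the saved
tags with an xarray keyed by the swap entry value. MY_TAG_STORAGE_SIZE and the
my_arch_copy_tags_*() routines are assumed arch-specific details, and the
sketch assumes the page is already in the swap cache when
arch_prepare_to_swap() runs, so page_private() holds the swap entry value.

/* Illustrative sketch only -- not part of this patch. */
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/xarray.h>

/* Hypothetical per-arch details (not real kernel API): */
#define MY_TAG_STORAGE_SIZE	128	/* bytes of tag data per page, assumed */
void my_arch_copy_tags_from_page(void *dst, struct page *page);
void my_arch_copy_tags_to_page(struct page *page, const void *src);

static DEFINE_XARRAY(my_swapped_tags);	/* swp_entry.val -> saved tag buffer */

int my_arch_save_tags(struct page *page)
{
	/* Page is in the swap cache, so page_private() is the swap entry value. */
	unsigned long index = page_private(page);
	void *tags, *old;

	tags = kmalloc(MY_TAG_STORAGE_SIZE, GFP_KERNEL);
	if (!tags)
		return -ENOMEM;
	my_arch_copy_tags_from_page(tags, page);

	old = xa_store(&my_swapped_tags, index, tags, GFP_KERNEL);
	if (xa_is_err(old)) {
		kfree(tags);
		return xa_err(old);
	}
	kfree(old);	/* free a stale buffer if the slot was reused */
	return 0;
}

void my_arch_restore_tags(swp_entry_t entry, struct page *page)
{
	void *tags = xa_load(&my_swapped_tags, entry.val);

	if (tags)
		my_arch_copy_tags_to_page(page, tags);
}

void my_arch_drop_tags(swp_entry_t entry)
{
	kfree(xa_erase(&my_swapped_tags, entry.val));
}

my_arch_drop_area_tags() would then walk the same xarray and free every buffer
whose key decodes to the given swap type before the area is reused.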