From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 18 Aug 2020 19:55:01 -0700
From: Andrew Morton
To: bhe@redhat.com, cai@lca.pw, david@redhat.com, jasowang@redhat.com,
 mhocko@suse.com, mike.kravetz@oracle.com, mm-commits@vger.kernel.org,
 mst@redhat.com, pankaj.gupta.linux@gmail.com, rppt@kernel.org
Subject: + mm-page_alloc-tweak-comments-in-has_unmovable_pages.patch added to -mm tree
Message-ID: <20200819025501.gJhZlolfC%akpm@linux-foundation.org>
In-Reply-To: <20200814172939.55d6d80b6e21e4241f1ee1f3@linux-foundation.org>
User-Agent: s-nail v14.8.16
Sender: mm-commits-owner@vger.kernel.org
Precedence: bulk
Reply-To: linux-kernel@vger.kernel.org
X-Mailing-List: mm-commits@vger.kernel.org


The patch titled
     Subject: mm/page_alloc: tweak comments in has_unmovable_pages()
has been added to the -mm tree.  Its filename is
     mm-page_alloc-tweak-comments-in-has_unmovable_pages.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-page_alloc-tweak-comments-in-has_unmovable_pages.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-page_alloc-tweak-comments-in-has_unmovable_pages.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: David Hildenbrand
Subject: mm/page_alloc: tweak comments in has_unmovable_pages()

Patch series "mm / virtio-mem: support ZONE_MOVABLE", v5.
When introducing virtio-mem, the semantics of ZONE_MOVABLE were rather
unclear, which is why we special-cased ZONE_MOVABLE such that partially
plugged blocks would never end up in ZONE_MOVABLE.

Now that the semantics are much clearer (and are documented in patch #6),
let's support partially plugged memory blocks in ZONE_MOVABLE, allowing
partially plugged memory blocks to be onlined to ZONE_MOVABLE and also
unplugging from such memory blocks.  This avoids surprises when onlining
of memory blocks suddenly fails, just because they are not completely
populated by virtio-mem (yet).

This is especially helpful for testing, but also paves the way for
virtio-mem optimizations, allowing more memory to get reliably unplugged.

Cleanup has_unmovable_pages() and set_migratetype_isolate(), providing
better documentation of how ZONE_MOVABLE interacts with different kinds
of unmovable pages (memory offlining vs. alloc_contig_range()).

This patch (of 6):

Let's move the split comment regarding bootmem allocations and memory
holes, especially in the context of ZONE_MOVABLE, to the PageReserved()
check.

Link: http://lkml.kernel.org/r/20200816125333.7434-1-david@redhat.com
Link: http://lkml.kernel.org/r/20200816125333.7434-2-david@redhat.com
Signed-off-by: David Hildenbrand
Reviewed-by: Baoquan He
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Cc: Mike Kravetz
Cc: Pankaj Gupta
Cc: Jason Wang
Cc: Mike Rapoport
Cc: Qian Cai
Signed-off-by: Andrew Morton
---

 mm/page_alloc.c |   22 ++++++----------------
 1 file changed, 6 insertions(+), 16 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-tweak-comments-in-has_unmovable_pages
+++ a/mm/page_alloc.c
@@ -8219,14 +8219,6 @@ struct page *has_unmovable_pages(struct
 	unsigned long iter = 0;
 	unsigned long pfn = page_to_pfn(page);
 
-	/*
-	 * TODO we could make this much more efficient by not checking every
-	 * page in the range if we know all of them are in MOVABLE_ZONE and
-	 * that the movable zone guarantees that pages are migratable but
-	 * the later is not the case right now unfortunatelly. E.g. movablecore
-	 * can still lead to having bootmem allocations in zone_movable.
-	 */
-
 	if (is_migrate_cma_page(page)) {
 		/*
 		 * CMA allocations (alloc_contig_range) really need to mark
@@ -8245,6 +8237,12 @@ struct page *has_unmovable_pages(struct
 
 		page = pfn_to_page(pfn + iter);
 
+		/*
+		 * Both, bootmem allocations and memory holes are marked
+		 * PG_reserved and are unmovable. We can even have unmovable
+		 * allocations inside ZONE_MOVABLE, for example when
+		 * specifying "movablecore".
+		 */
 		if (PageReserved(page))
 			return page;
 
@@ -8318,14 +8316,6 @@ struct page *has_unmovable_pages(struct
 		 * it.  But now, memory offline itself doesn't call
 		 * shrink_node_slabs() and it still to be fixed.
 		 */
-		/*
-		 * If the page is not RAM, page_count()should be 0.
-		 * we don't need more check. This is an _used_ not-movable page.
-		 *
-		 * The problematic thing here is PG_reserved pages. PG_reserved
-		 * is set to both of a memory hole page and a _used_ kernel
-		 * page at boot.
-		 */
 		return page;
 	}
 	return NULL;
_

Patches currently in -mm which might be from david@redhat.com are

mm-page_alloc-tweak-comments-in-has_unmovable_pages.patch
mm-page_isolation-exit-early-when-pageblock-is-isolated-in-set_migratetype_isolate.patch
mm-page_isolation-drop-warn_on_once-in-set_migratetype_isolate.patch
mm-page_isolation-cleanup-set_migratetype_isolate.patch
virtio-mem-dont-special-case-zone_movable.patch
mm-document-semantics-of-zone_movable.patch
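For readers outside mm, the control flow the comment now documents can be sketched as a userspace model.  Everything below is hypothetical: `struct fake_page`, its `reserved`/`cma` fields, and `sketch_has_unmovable_pages()` are illustrative stand-ins for `struct page`, `PG_reserved`, `is_migrate_cma_page()`, and the real `has_unmovable_pages()`; only the ordering of the checks mirrors the patched code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for struct page. */
struct fake_page {
	bool reserved;	/* models PG_reserved: bootmem allocation or memory hole */
	bool cma;	/* models is_migrate_cma_page() for the pageblock */
};

/*
 * Simplified model of the scan in has_unmovable_pages(): returns the
 * first unmovable page in the range, or NULL if the range is movable.
 */
static struct fake_page *sketch_has_unmovable_pages(struct fake_page *pages,
						    size_t count)
{
	/* CMA pageblocks are special-cased up front, before the per-page loop. */
	if (count && pages[0].cma)
		return NULL;

	for (size_t i = 0; i < count; i++) {
		/*
		 * Both bootmem allocations and memory holes are marked
		 * PG_reserved and are unmovable -- even inside ZONE_MOVABLE
		 * (e.g. with "movablecore"), which is why the patch moves
		 * the comment to this check.
		 */
		if (pages[i].reserved)
			return &pages[i];
	}
	return NULL;
}
```

The point of the relocation in the patch is visible here: the caveat about bootmem allocations and `movablecore` applies to the `PG_reserved` test specifically, not to the function as a whole.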