Date: Mon, 3 Aug 2020 21:49:42 -0400
From: Qian Cai
To: HORIGUCHI NAOYA(堀口 直也)
Cc: nao.horiguchi@gmail.com, linux-mm@kvack.org, mhocko@kernel.org,
	akpm@linux-foundation.org, mike.kravetz@oracle.com, osalvador@suse.de,
	tony.luck@intel.com, david@redhat.com, aneesh.kumar@linux.vnet.ibm.com,
	zeil@yandex-team.ru, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 00/16] HWPOISON: soft offline rework
Message-ID: <20200804014942.GC8894@lca.pw>
References: <20200731122112.11263-1-nao.horiguchi@gmail.com>
	<20200803190709.GB8894@lca.pw>
	<20200804011644.GA25028@hori.linux.bs1.fc.nec.co.jp>
In-Reply-To: <20200804011644.GA25028@hori.linux.bs1.fc.nec.co.jp>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue, Aug 04, 2020 at 01:16:45AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Mon, Aug 03, 2020 at 03:07:09PM -0400, Qian Cai wrote:
> > On Fri, Jul 31, 2020 at 12:20:56PM +0000, nao.horiguchi@gmail.com wrote:
> > > This patchset is the latest version of the soft offline rework patchset
> > > targeted for v5.9.
> > >
> > > The main focus of this series is to stabilize soft offline. Historically,
> > > soft offlined pages have suffered from race conditions because PageHWPoison
> > > is used a little too aggressively, which (directly or indirectly) invades
> > > other mm code that cares little about hwpoison. This results in unexpected
> > > behavior or kernel panics, which is very far from soft offline's "do not
> > > disturb userspace or other kernel components" policy.
> > >
> > > The main point of this change set is to contain the target page "via the
> > > buddy allocator": we first free the target page as we do for normal pages,
> > > and remove it from buddy only once we have confirmed that it reached the
> > > free list. There is surely a race window with page allocation, but that's
> > > fine, because it means someone really wants that page and the page still
> > > works, so soft offline can happily give up.
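
[ To illustrate the containment idea above: the free-page path only marks a
  page poisoned after it has successfully pulled it off the buddy free list.
  The sketch below is a simplification, not the actual patch code;
  take_page_off_buddy() is the helper the series adds in mm/page_alloc.c,
  and the function name here is made up for illustration. ]

	/* Conceptual sketch only: contain a page that was already freed. */
	static int soft_offline_free_page_sketch(struct page *page)
	{
		/*
		 * The target page was freed like any normal page.  Grab it
		 * from the buddy free list only if it is still there; if it
		 * was reallocated in the meantime, give up -- somebody really
		 * wants that page and the page still works.
		 */
		if (!take_page_off_buddy(page))
			return -EBUSY;

		SetPageHWPoison(page);		/* now contained */
		num_poisoned_pages_inc();
		return 0;
	}
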
> > >
> > > v4 from Oscar tries to handle the race around reallocation, but that part
> > > still seems to be a work in progress, so I decided to separate it from the
> > > changes for v5.9. Thank you for your contribution, Oscar.
> > >
> > > The issue reported by Qian Cai is fixed by patch 16/16.
> > >
> > > This patchset is based on v5.8-rc7-mmotm-2020-07-27-18-18, but I applied
> > > this series after reverting the previous version.
> > > Maybe https://github.com/Naoya-Horiguchi/linux/commits/soft-offline-rework.v5
> > > shows what I did more precisely.
> > >
> > > Any other comment/suggestion/help would be appreciated.
> >
> > There is another issue with this patchset (with and without the patch [1]).
> >
> > [1] https://lore.kernel.org/lkml/20200803133657.GA13307@hori.linux.bs1.fc.nec.co.jp/
> >
> > Arm64 using 512M-size hugepages starts to fail allocations prematurely.
> >
> > # ./random 1
> > - start: migrate_huge_offline
> > - use NUMA nodes 0,1.
> > - mmap and free 2147483648 bytes hugepages on node 0
> > - mmap and free 2147483648 bytes hugepages on node 1
> > madvise: Cannot allocate memory
> >
> > [ 284.388061][ T3706] soft offline: 0x956000: hugepage isolation failed: 0, page count 2, type 17ffff80001000e (referenced|uptodate|dirty|head)
> > [ 284.400777][ T3706] Soft offlining pfn 0x8e000 at process virtual address 0xffff80000000
> > [ 284.893412][ T3706] Soft offlining pfn 0x8a000 at process virtual address 0xffff60000000
> > [ 284.901539][ T3706] soft offline: 0x8a000: hugepage isolation failed: 0, page count 2, type 7ffff80001000e (referenced|uptodate|dirty|head)
> > [ 284.914129][ T3706] Soft offlining pfn 0x8c000 at process virtual address 0xffff80000000
> > [ 285.433497][ T3706] Soft offlining pfn 0x88000 at process virtual address 0xffff60000000
> > [ 285.720377][ T3706] Soft offlining pfn 0x8a000 at process virtual address 0xffff80000000
> > [ 286.281620][ T3706] Soft offlining pfn 0xa000 at process virtual address 0xffff60000000
> > [ 286.290065][ T3706] soft offline: 0xa000: hugepage migration failed -12, type 7ffff80001000e (referenced|uptodate|dirty|head)
>
> I think that this is due to the lack of contiguous memory.
> This test program iterates soft offlining many times for hugepages,
> so eventually one page in every 512MB will have been removed from buddy, and
> then we can't allocate a hugepage any more even if we have enough free pages.
> This is not good for heavy hugepage users, but that should be the intended
> behavior.
>
> It seems that random.c calls madvise(MADV_SOFT_OFFLINE) for 2 hugepages,
> and iterates that 1000 (==NR_LOOP) times, so if the system doesn't have
> enough memory to cover the range of 2000 hugepages (1000GB on the Arm64
> system), this ENOMEM should reproduce as expected.

Well, each iteration does an mmap/munmap, so there should be no leaking.

https://gitlab.com/cailca/linux-mm/-/blob/master/random.c#L376

It also seems to me that madvise(MADV_SOFT_OFFLINE) does start to fragment
memory somehow, because after this "madvise: Cannot allocate memory" happened,
I immediately checked /proc/meminfo and found no hugepage usage at all.
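
For reference, each iteration of that test essentially does the following per
hugepage (a minimal standalone sketch, not the exact random.c code; the 512M
hugepage size and NR_LOOP=1000 come from the numbers discussed above, and the
MADV_SOFT_OFFLINE fallback define is only there for older libc headers):

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	#ifndef MADV_SOFT_OFFLINE
	#define MADV_SOFT_OFFLINE 101		/* value from the kernel uapi headers */
	#endif

	#define HPAGE_SIZE	(512UL << 20)	/* 512M hugepages on this arm64 box */
	#define NR_LOOP		1000

	int main(void)
	{
		for (int i = 0; i < NR_LOOP; i++) {
			/* map and touch one hugetlb page */
			void *addr = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
					  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
					  -1, 0);
			if (addr == MAP_FAILED) {
				perror("mmap");
				return 1;
			}
			memset(addr, 0, HPAGE_SIZE);

			/* needs root; this is where ENOMEM shows up */
			if (madvise(addr, HPAGE_SIZE, MADV_SOFT_OFFLINE))
				perror("madvise");

			munmap(addr, HPAGE_SIZE);
		}
		return 0;
	}

(Run as root with enough 512M hugepages reserved on the node under test.)
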
>
> > Reverting this patchset and its dependency patchset [2] (reverting the
> > dependency alone did not help) fixed it.
>
> But it's still not clear to me why this was not visible before this
> patchset, so I need to look into it more.
>
> Thanks,
> Naoya Horiguchi
>
> >
> > # ./random 1
> > - start: migrate_huge_offline
> > - use NUMA nodes 0,1.
> > - mmap and free 2147483648 bytes hugepages on node 0
> > - mmap and free 2147483648 bytes hugepages on node 1
> > - pass: mmap_offline_node_huge
> >
> > [2] https://lore.kernel.org/linux-mm/1594622517-20681-1-git-send-email-iamjoonsoo.kim@lge.com/
> >
> > >
> > > Thanks,
> > > Naoya Horiguchi
> > > ---
> > > Previous versions:
> > >   v1: https://lore.kernel.org/linux-mm/1541746035-13408-1-git-send-email-n-horiguchi@ah.jp.nec.com/
> > >   v2: https://lore.kernel.org/linux-mm/20191017142123.24245-1-osalvador@suse.de/
> > >   v3: https://lore.kernel.org/linux-mm/20200624150137.7052-1-nao.horiguchi@gmail.com/
> > >   v4: https://lore.kernel.org/linux-mm/20200716123810.25292-1-osalvador@suse.de/
> > > ---
> > > Summary:
> > >
> > > Naoya Horiguchi (8):
> > >       mm,hwpoison: cleanup unused PageHuge() check
> > >       mm, hwpoison: remove recalculating hpage
> > >       mm,madvise: call soft_offline_page() without MF_COUNT_INCREASED
> > >       mm,hwpoison-inject: don't pin for hwpoison_filter
> > >       mm,hwpoison: remove MF_COUNT_INCREASED
> > >       mm,hwpoison: remove flag argument from soft offline functions
> > >       mm,hwpoison: introduce MF_MSG_UNSPLIT_THP
> > >       mm,hwpoison: double-check page count in __get_any_page()
> > >
> > > Oscar Salvador (8):
> > >       mm,madvise: Refactor madvise_inject_error
> > >       mm,hwpoison: Un-export get_hwpoison_page and make it static
> > >       mm,hwpoison: Kill put_hwpoison_page
> > >       mm,hwpoison: Unify THP handling for hard and soft offline
> > >       mm,hwpoison: Rework soft offline for free pages
> > >       mm,hwpoison: Rework soft offline for in-use pages
> > >       mm,hwpoison: Refactor soft_offline_huge_page and __soft_offline_page
> > >       mm,hwpoison: Return 0 if the page is already poisoned in soft-offline
> > >
> > >  drivers/base/memory.c      |   2 +-
> > >  include/linux/mm.h         |  12 +-
> > >  include/linux/page-flags.h |   6 +-
> > >  include/ras/ras_event.h    |   3 +
> > >  mm/hwpoison-inject.c       |  18 +--
> > >  mm/madvise.c               |  39 +++---
> > >  mm/memory-failure.c        | 334 ++++++++++++++++++++-------------------------
> > >  mm/migrate.c               |  11 +-
> > >  mm/page_alloc.c            |  60 ++++++--
> > >  9 files changed, 233 insertions(+), 252 deletions(-)